Audit logs archive
Published On May 16, 2024 - 1:57 PM

All other services continually push their audit logs to the audit service, and the archive feature moves older logs into external storage. The audit archival process automates this task.
This page presents an overview of the steps required to archive audit logs in Kyndryl Modern Operations Applications.

Prerequisites

Before you can run the audit archival process, you must meet these prerequisites:
  • Permissions:
    The user who performs the audit archival process must have the Audit Admin role.
  • Archival endpoint:
    An endpoint must be set up to receive the archive. The audit archival process has been tested only with IBM Cloud Object Storage. For more information, see Creating IBM Cloud Object Storage.
  • Archival credential:
    If a DevOps-ingested credential is present, it is used for the archival. Otherwise, you can use the SYS_AUDIT_ARCHIVAL_ADMIN account credential.
  • System account:
    After creating the object storage, update the storage credentials on the SYS_AUDIT_ARCHIVAL_ADMIN account. A system account with accountId SYS_AUDIT_ARCHIVAL_ADMIN is created on tenant registration. Verify that the account exists and update the credential using the following APIs.
The user must have the System Admin, Provider Account Admin, or corresponding roles to perform these actions on the accounts.
  • API to fetch the account, where <account_id> is SYS_AUDIT_ARCHIVAL_ADMIN:
    GET cb-credential-service/api/v2.0/accounts/SYS_AUDIT_ARCHIVAL_ADMIN
    Expected response:
    { "account": { "basicInfo": { "accountName": "SYS_AUDIT_ARCHIVAL_ADMIN", "serviceProviderType": "ibmcloud", "serviceProviderCode": "ibmcloud", "isActive": "Active", "accountType": "standalone", "userType": "system", "credential_count": 1 }, "accountId": "SYS_AUDIT_ARCHIVAL_ADMIN", "advancedInfo": { "accountNumber": "11121" }, "credentials": [ ] } }
    If the account exists, update its credentials fields, which are empty here. Otherwise, you must create an account with this payload. For more information, see Creating a SYS AUDIT ARCHIVAL ADMIN account.
    • Update the credential of the system account using this API:
      PUT cb-credential-service/api/v2.0/accounts/SYS_AUDIT_ARCHIVAL_ADMIN
      Add the credentials that you obtained after creating the object storage to the credentials: [ ] array of the account payload and run the API. For more information, see Creating IBM Cloud Object Storage and Object storage credential mapping.
      { "credentialName": "SYS_AUDIT_ARCHIVAL_ADMIN", "status": "Active", "passwordFields": { "apikey": "<apikey>", "authEndpoint": "https://iam.bluemix.net/oidc/token", "bucketName": "<bucketName>", "endpoint": "<endpoint>", "resourceInstanceId": "<resourceInstanceId>", "serviceName": "s3" } }
  • For non-MT systems, the storage credentials pushed by DevOps are used; if they do not exist, credentials are retrieved from the SYS_AUDIT_ARCHIVAL_ADMIN account.
  • Audit archive policy:
    The audit archival process runs based on a policy, which is created on tenant registration. You can verify it using the following audit service API. If the policy does not exist, create one using the archival policy API. For more information, see Audit archival process policy settings.
    GET: core/audit/api/v2.1/archivePolicies/policy
    Expected response (default policy stored):
    { "policy_type": "auditlog_archival_policy", "format": "ZIP", "periodicity": "WEEKLY", "startAt": "Y:Q:1M:6W-00:00:01", "recordsPerArchive": 100000, "retentionPolicy": { "hotRetentionPeriod": 30, "hotRetentionCount": 500000 }, "archiveEndpoint": { "type": "object_storage", "credentials": "SYS_AUDIT_ARCHIVAL_ADMIN" } }
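Before calling the APIs above, the required policy fields can be sanity-checked locally. The following is a minimal sketch that mirrors the required fields shown in the default policy; the validator function is illustrative and is not the service's actual schema:

```python
# Sketch: locally validate an audit archive policy document before sending it
# to core/audit/api/v2.1/archivePolicies/policy. The required fields are
# taken from the default policy on this page; the real service may enforce
# additional rules.

REQUIRED_TOP_LEVEL = ("format", "startAt", "retentionPolicy", "archiveEndpoint")
REQUIRED_RETENTION = ("hotRetentionPeriod", "hotRetentionCount")
REQUIRED_ENDPOINT = ("type", "credentials")

def validate_archive_policy(policy: dict) -> list:
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    for field in REQUIRED_TOP_LEVEL:
        if field not in policy:
            problems.append(f"'{field}' is a required property")
    for field in REQUIRED_RETENTION:
        if field not in policy.get("retentionPolicy", {}):
            problems.append(f"'retentionPolicy.{field}' is a required property")
    for field in REQUIRED_ENDPOINT:
        if field not in policy.get("archiveEndpoint", {}):
            problems.append(f"'archiveEndpoint.{field}' is a required property")
    return problems

# The default policy from this page passes the check.
default_policy = {
    "policy_type": "auditlog_archival_policy",
    "format": "ZIP",
    "periodicity": "WEEKLY",
    "startAt": "Y:Q:1M:6W-00:00:01",
    "recordsPerArchive": 100000,
    "retentionPolicy": {"hotRetentionPeriod": 30, "hotRetentionCount": 500000},
    "archiveEndpoint": {"type": "object_storage",
                        "credentials": "SYS_AUDIT_ARCHIVAL_ADMIN"},
}
```

The problem strings deliberately match the validation errors shown later in this page, so a local failure looks like the server's 400 response.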

Steps for the audit archival process

There are two methods for performing the audit archival process:
  • Automatic audit archival processes run based on the stored archive policy.
  • Manual audit archival processes are triggered by the audit admin, who provides an 'archiveUntil' date up to which the audit logs are archived.
The Audit Admin role is required to run the audit archival process.

Automatic (policy-based) audit archival process

The automated audit archival process is triggered every 30 minutes from the service startup time and checks against the date set in the policy.
For example, if the policy startAt field is Y:Q:28M:W-00:00:00, the archival has these characteristics:
  • The scheduled archival is triggered on the 28th day of every month.
  • The scheduled archival wakes up every 30 minutes from the service startup time.
  • The actual archive is triggered if the current date matches the startAt field set in the policy.
For example, if the system is deployed on 28 February at 00:00:00 and the startAt field in the policy is set to 28M, the scheduled audit archival process wakes up every 30 minutes and checks the policy. The setting of 28 means that the audit archival process is triggered on the 28th of March.
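The wake-up check described above can be sketched as follows. This is an illustrative interpretation of the startAt format based on the examples on this page, not the service's actual code:

```python
# Sketch: decide whether a monthly scheduled archival should fire today,
# given a startAt string such as "Y:Q:28M:W-00:00:00". The parsing rule
# (day-of-month precedes the "M" token) is inferred from this page's examples.
import re
from datetime import date

def monthly_run_day(start_at: str):
    """Return the day-of-month (1-28) from a startAt string, or None."""
    match = re.search(r":(\d{1,2})M:", start_at)
    return int(match.group(1)) if match else None

def should_trigger(start_at: str, today: date) -> bool:
    """True if today's day-of-month matches the policy's monthly run day."""
    day = monthly_run_day(start_at)
    return day is not None and today.day == day
```

In the deployment example above, `should_trigger("Y:Q:28M:W-00:00:00", ...)` stays false on every 30-minute wake-up until the 28th of the following month.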
Automatic audit archival processes are of two types:
  • Time based:
    During the automated archival, if hotRetentionPeriod is 30 in the policy, the process keeps the logs for the last 30 days and archives the older logs.
    If the policy is not present, automatic archival based on the retention period uses the default DEFAULT_HOT_RETENTION_PERIOD. The default value is 30.
  • Size based:
    During the automated archival, if the total audit log count is more than hotRetentionCount, the system keeps that many logs and archives the older logs.
    If the policy is not present, automatic archival based on the retention count uses the default DEFAULT_HOT_RETENTION_COUNT. The default value is 100000.
Size-based audit archival processes have higher priority than time-based ones.

Manual (policy-based) audit archival process

You can trigger a manual audit archival process using the following API. The process archives the audit logs up to the "archiveUntil" date. The endpoint details are retrieved from the archive policy.
POST core/audit/api/v2.1/archiveUntil
Payload:
{ "archiveUntil":"2020-04-27T18:29:59Z", "mode":"manual", "filename":"05082020948AM" }
The payload contains the following fields:
  • archiveUntil:
    The date up to which the audit archival process runs.
  • mode:
    The mode of the audit archival process that you want to run.
  • filename:
    The name under which you want to save the archive file.
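A minimal sketch of composing this payload follows. The helper name is hypothetical; only the three payload fields come from the API above:

```python
# Sketch: build the POST body for core/audit/api/v2.1/archiveUntil.
# The timestamp format ("2020-04-27T18:29:59Z") matches the sample payload
# on this page; the helper itself is illustrative.
from datetime import datetime, timezone

def build_archive_until_payload(archive_until: datetime, filename: str) -> dict:
    """Build a manual-archival payload from a timezone-aware datetime."""
    return {
        # UTC timestamp with a trailing "Z", as in the sample payload
        "archiveUntil": archive_until.astimezone(timezone.utc)
                                     .strftime("%Y-%m-%dT%H:%M:%SZ"),
        "mode": "manual",
        "filename": filename,
    }

payload = build_archive_until_payload(
    datetime(2020, 4, 27, 18, 29, 59, tzinfo=timezone.utc), "05082020948AM")
```

Sending `payload` as the POST body reproduces the sample request shown above.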
Response codes:
200
{ "message":"Audit archival Successful. Created Archive File M-5e9fe41b8a2291000160fd0c-05082020948AM-1587738195000-1588012199000.zip", "translateCode":"CO_AUDIT_ARCHIVAL_SUCCESSFUL","translateParameters": ["M-5e9fe41b8a2291000160fd0c-05082020948AM-1587738195000-1588012199000.zip"], "job_id":"677e31b8-65a4-4a91-9558-fd74f9bcb001" }
403
If you get this error, assign the Audit Admin and Audit Viewer roles to the user.
{ "message":"User 5ea1423463343f135c6584d5 cannot access route /core/audit/api/v2.1/archiveUntil", "translateCode":"CO401UNAUTHORIZED_ROUTE_ACCESS", "translateParameters":["5ea1423463343f135c6584d5", "/core/audit/api/v2.1/archiveUntil"] }
400
If you get this error, add the credentials in the SYS_AUDIT_ARCHIVAL_ADMIN account.
{ "message": "{\"message\": \"Archival Failed. Error in creating the archive file due to error ::400 Bad Request: Credentials not found in system account\", \"translateCode\": \"CO_ARCHIVAL_FAILED\", \"translateParameters\": [\"400 Bad Request: Credentials not found in system account\"]}", "job_id": "b6903e1d-f7ed-4914-85d2-0b94665d89de" }
400
If you get this error, verify the object storage credentials for a similar error response.
{ "message": "{\"message\": \"Archival Failed. Error in creating the archive file due to error:: 400 Bad Request: Error when retrieving credentials from https://iam.bluemix.net/oidc/token: HttpCode(400) - Retrieval of tokens from server failed.\", \"translateCode\": \"CO_ARCHIVAL_FAILED\", \"translateParameters\": [\"400 Bad Request: Error when retrieving credentials from https://iam.bluemix.net/oidc/token: HttpCode(400) - Retrieval of tokens from server failed.\"]}", "job_id": "475a5538-c8fe-4348-9139-26ed6ad39ccd" }
400
In this case, no SYS_AUDIT_ARCHIVAL_ADMIN account is present for the audit archival process. Add the SYS_AUDIT_ARCHIVAL_ADMIN account.
{ "message": "{\"message\": \"Archival Failed. Error in creating the archive file due to error ::400 Bad Request: Unable to fetch the account details\", \"translateCode\": \"CO_ARCHIVAL_FAILED\", \"translateParameters\": [\"400 Bad Request: Unable to fetch the account details\"]}", "job_id": "dfb46d3c-53fb-4cbb-b55a-05ca7a341951" }
400
In this case, no archival policy is present. Add a new policy.
{ "message": "{\"message\": \"Audit archival process failed at ARCHIVE_INITIATED stage. No archive policy found!!\", \"translateCode\": \"CO400_AUDIT_ARCHIVAL_INITIATION_FAILED\", \"translateParameters\": []}", "job_id": "91a725e5-c008-4a31-94b7-73d362dac615" }
When the audit archival process completes, you will receive a notification email that informs you whether the process was successful.
If the audit archival was not successful, the audit admin can restart the archive process or mark it as completed using the following API. Before using the API, keep in mind the following guidelines:
  • The audit admin can perform a restart only if the archive process failed during the ARCHIVE_INITIATED or ARCHIVE_GENERATED stages.
  • No restart is allowed if there is a failure during the ARCHIVE_STORED or ARCHIVE_PURGED stages.
  • The audit admin must fix such failures manually and then mark the status for each stage as "markComplete".
PATCH core/audit/api/v2.1/archives/{archiveId}
Payload:
{ "action":"markComplete" }
or
{ "action":"reStart" }
Response:
Purged audit logs successfully, Audit archival job completed.
The following table shows the archive stages.

Archive stage       Status       Is error possible  Restart allowed
ARCHIVE_INITIATED   In Progress  Yes                Yes
ARCHIVE_GENERATED   In Progress  Yes                Yes
ARCHIVE_STORED      In Progress  Yes                No
ARCHIVE_PURGED      Completed    No                 No
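The restart rules in the table can be summarized in a small sketch. This is illustrative only; for non-restartable stages, the failure still has to be fixed manually before marking the stage complete:

```python
# Sketch: choose the PATCH action for a failed archive based on the stage
# it failed in. Restart is allowed only for ARCHIVE_INITIATED and
# ARCHIVE_GENERATED failures, per the table above.
RESTARTABLE_STAGES = {"ARCHIVE_INITIATED", "ARCHIVE_GENERATED"}

def patch_action_for_failure(stage: str) -> str:
    """Return the PATCH 'action' value for a failed archive at this stage.

    "markComplete" assumes the audit admin has already fixed the failure
    manually, as required for ARCHIVE_STORED and ARCHIVE_PURGED failures.
    """
    return "reStart" if stage in RESTARTABLE_STAGES else "markComplete"
```

The returned string is the `action` value in the PATCH payload shown above.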
To view the file, use the following API. You will need the archival password. Get the <job_id> from the archival API response. You can also view the archival details in the Audit user interface (UI).
GET: /core/audit/api/v2.1/archives/{job_id}
The following is a sample response.
{ "startTimeStamp": 1587738195000, "archivedBy": "5e9fe4f264c0eb95c1c0cf47", "archiveInitiatedDate": 1588931313000, "endTimeStamp": 1588012199000, "archiveStageStatus": "COMPLETED", "archiveStatus": "ARCHIVE_PURGED", "archiveJobsOverallStatus": "COMPLETED", "mode": "manual", "userInputFileName": "05082020948AM", "statusLastUpdatedAt": [ "2020-05-08T09:48:36Z" ], "moreInfo": "", "CRC": "db28cc0adb87eb6e7782a802c087afb8", "archiveId": "677e31b8-65a4-4a91-9558-fd74f9bcb001", "details": { "fileName": "M-5e9fe41b8a2291000160fd0c-05082020948AM-1587738195000-1588012199000.zip", "fileLocation": "/www/app/audit_archives/M-5e9fe41b8a2291000160fd0c-05082020948AM-1587738195000-1588012199000.zip", "filePassword": "<filePassword>", "fileContents": [ "/www/app/audit_archives/M-5e9fe41b8a2291000160fd0c-05082020948AM-1587738195000-1588012199000-1.json" ], "fileSize": "8.6 KB" }, "doc_type": "archive_file_details" }

Audit archival process policy settings

The following is the default policy that is stored in the audit service on tenant registration. Modify it according to your preferences.
AUDIT_ARCHIVAL_POLICY = { "policy_type": "auditlog_archival_policy", "format": "ZIP", "periodicity": "WEEKLY", "startAt": "Y:Q:1M:6W-00:00:01", "recordsPerArchive": 100000, "retentionPolicy": { "hotRetentionPeriod": 30, "hotRetentionCount": 500000 }, "archiveEndpoint": { "type": "object_storage", "credentials": "SYS_AUDIT_ARCHIVAL_ADMIN" } }
The policy includes the following fields:
  • format:
    (required) The archive format. The default is ZIP.
  • periodicity:
    The interval at which the audit archival process runs. The default is WEEKLY. Currently, MONTHLY and WEEKLY are supported.
  • startAt:
    (required) The date and time at which to start the periodic audit archival process. For example, Y:Q:2M:W-00:00:00 runs on the 2nd of every month for MONTHLY, and Y:Q:M:6W-00:00:00 runs on Sunday of every week for WEEKLY (the weekday is a number in the range 0-6, Monday to Sunday).
    Scheduled audit archival processes have these limitations:
    • Only days from 1 to 28 are supported for MONTHLY.
    • Only the days 0 to 6 (Monday to Sunday) are supported for WEEKLY.
    • Time is not currently supported.
  • recordsPerArchive:
    Sets the maximum number of records in one archive file. The default is 100000.
  • archiveEndpoint:
    The details of the endpoint to which the archive is stored:
    • type:
      (required) The endpoint type. Only the object_storage type is currently supported.
    • credentials:
      (required) The credentials of the endpoint. Credentials are retrieved from the SYS_AUDIT_ARCHIVAL_ADMIN account in the credential service.
  • retentionPolicy:
    The details of the retention policy that determines how long the audit logs are retained:
    • hotRetentionPeriod:
      (required) The number of days for which the audit logs are retained in the system. The default is 30.
    • hotRetentionCount:
      (required) The number of records that can be retained. The default is 500000.
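Given the limitations above, a helper for composing a startAt value might look like the following sketch. The string layout follows the examples on this page; it is an assumption, not a documented grammar:

```python
# Sketch: compose a startAt string under the documented limits
# (days 1-28 for MONTHLY, weekdays 0-6 for WEEKLY; the time portion is
# fixed at 00:00:00 because time is not currently supported).
def build_start_at(periodicity: str, day: int) -> str:
    """Return a startAt string for the given periodicity and day."""
    if periodicity == "MONTHLY":
        if not 1 <= day <= 28:
            raise ValueError("MONTHLY supports days 1 to 28 only")
        return f"Y:Q:{day}M:W-00:00:00"
    if periodicity == "WEEKLY":
        if not 0 <= day <= 6:
            raise ValueError("WEEKLY supports weekdays 0 to 6 (Monday-Sunday)")
        return f"Y:Q:M:{day}W-00:00:00"
    raise ValueError("periodicity must be MONTHLY or WEEKLY")
```

For example, `build_start_at("MONTHLY", 2)` yields the monthly example above, and the helper rejects a day of 29 rather than producing a value the scheduler would never match.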
To update the policy, use the following API. The payload is identical to the default policy shown previously.
PUT: core/audit/api/v2.1/archivePolicies/policy
The following is an example response:
200
{ "message": "Successfully updated audit policy!!!!", "translateCode": "CO200_UPDATED_AUDIT_ARCHIVE_POLICY", "translateParameters": [] }
400
If you get this error, it is because startAt and format are required fields.
{ "errors": { "startAt": "'startAt' is a required property" }, "message": "Input payload validation failed" }
400
If you receive this error, it is because the nested fields of retentionPolicy and archiveEndpoint are required.
{ "errors": { "retentionPolicy.hotRetentionCount": "'hotRetentionCount' is a required property", "retentionPolicy.hotRetentionPeriod": "'hotRetentionPeriod' is a required property" }, "message": "Input payload validation failed" }

Creating a SYS AUDIT ARCHIVAL ADMIN account

If the SYS_AUDIT_ARCHIVAL_ADMIN account does not exist, use the following API to create one. The payload is the same as for the PUT: cb-credential-service/api/v2.0/accounts/SYS_AUDIT_ARCHIVAL_ADMIN API. Add the credentials obtained from IBM Cloud Object Storage inside "credentials.passwordFields".
POST: cb-credential-service/api/v2.0/accounts
Payload:
{ "account": { "basicInfo": { "accountName": "SYS_AUDIT_ARCHIVAL_ADMIN", "serviceProviderType": "ibmcloud", "serviceProviderCode": "ibmcloud", "isActive": "Active", "accountType": "standalone", "userType": "system", "credential_count": 1 }, "accountId": "SYS_AUDIT_ARCHIVAL_ADMIN", "advancedInfo": { "accountNumber": "11121" }, "credentials": [{ "credentialName": "SYS_AUDIT_ARCHIVAL_ADMIN", "status": "Active", "passwordFields": { "apikey": "<api_key>", "authEndpoint": "https://iam.bluemix.net/oidc/token", "bucketName": "testbucketcorex", "endpoint": "https://s3.us-east.cloud-object-storage.appdomain.cloud", "resourceInstanceId": "<resourceInstanceId>", "serviceName": "s3" }, "purpose": [ "systemIntegration" ], "context": [{ "org": [ "org_all" ] }] }] } }
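The payload above can be assembled programmatically. The following sketch assumes you have the four object-storage values on hand; the function name is illustrative, and the field names come from the sample payload on this page:

```python
# Sketch: build the SYS_AUDIT_ARCHIVAL_ADMIN account-creation payload from
# object-storage credentials, mirroring the sample payload above.
def build_account_payload(apikey: str, bucket_name: str,
                          endpoint: str, resource_instance_id: str) -> dict:
    """Return the POST body for cb-credential-service/api/v2.0/accounts."""
    password_fields = {
        "apikey": apikey,
        "authEndpoint": "https://iam.bluemix.net/oidc/token",
        "bucketName": bucket_name,
        "endpoint": endpoint,
        "resourceInstanceId": resource_instance_id,
        "serviceName": "s3",
    }
    return {
        "account": {
            "basicInfo": {
                "accountName": "SYS_AUDIT_ARCHIVAL_ADMIN",
                "serviceProviderType": "ibmcloud",
                "serviceProviderCode": "ibmcloud",
                "isActive": "Active",
                "accountType": "standalone",
                "userType": "system",
                "credential_count": 1,
            },
            "accountId": "SYS_AUDIT_ARCHIVAL_ADMIN",
            "advancedInfo": {"accountNumber": "11121"},
            "credentials": [{
                "credentialName": "SYS_AUDIT_ARCHIVAL_ADMIN",
                "status": "Active",
                "passwordFields": password_fields,
                "purpose": ["systemIntegration"],
                "context": [{"org": ["org_all"]}],
            }],
        }
    }
```

Building the payload this way avoids the 400 (missing fields) and 500 (invalid passwordFields values) responses described below.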
Responses:
200
{ "message": "Successfully updated Account bearing id SYS_AUDIT_ARCHIVAL_ADMIN", "statusCode": 200, "translateCode": "CO200_SUCCESSFULLY_UPDATE_ACC_BEARING_ID", "translateParameters": [ "SYS_AUDIT_ARCHIVAL_ADMIN" ] }
500
If you receive an Internal Server Error, check the payload again to make sure that the passwordFields have valid values.
400
If you receive a Bad Request message, check the payload for missing fields.
409
The account already exists.
401
If you receive an Unauthorized error message, check whether a valid username and API key are given in the headers.

Creating IBM Cloud Object Storage

If you need to create IBM Cloud Object Storage, complete these steps:
  1. Select Catalog from the menu bar.
  2. Select Storage from the Categories navigation menu, or search for object storage.
  3. Select Object Storage from the displayed options.
  4. Create an object storage instance.
  5. Create a bucket.
  6. Navigate to Bucket > Configuration in the navigation menu and record the public endpoint. Use that value for the "endpoint" field in "archiveEndpoint".
  7. Select Service Credentials.
  8. Add a new credential.
  9. Select View credentials to get the credentials that you need.

Object storage credential mapping

Attribute           Example value
apikey              api_key
bucketName          testbucket
endpoint            s3.us-east.cloud-object-storage.appdomain.cloud
resourceInstanceId  crn:v1:bluemix:public:cloud-object-storage:global:a/abcd:xyz::
serviceName         s3
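A small sketch of applying this mapping follows. It assumes the key names found in an IBM Cloud Object Storage service-credentials JSON (apikey, resource_instance_id); the bucket name and endpoint come from the bucket Configuration page, and the helper name is illustrative:

```python
# Sketch: map an IBM COS "Service credentials" JSON plus bucket details to
# the passwordFields attributes in the table above.
def to_password_fields(cos_credentials: dict,
                       bucket_name: str, endpoint: str) -> dict:
    """Build passwordFields for the SYS_AUDIT_ARCHIVAL_ADMIN credential."""
    return {
        "apikey": cos_credentials["apikey"],
        "authEndpoint": "https://iam.bluemix.net/oidc/token",
        "bucketName": bucket_name,
        "endpoint": endpoint,
        "resourceInstanceId": cos_credentials["resource_instance_id"],
        "serviceName": "s3",
    }
```

The resulting dictionary plugs directly into the "passwordFields" object of the account payloads shown earlier on this page.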