
· 11 min read

INTRODUCTION

This guide covers common issues and their recommended solutions, from pipeline creation through execution, organized into sections. We will bring more troubleshooting tips in our upcoming guide series.

SERVICE

Issue:

Can we use the same service name for dev, QA, and prod so that choosing the same service automatically uses the same infrastructure and execution mode, or do we need to specify those in the CD deploy stage?

Solution:

You can use the same service to deploy to all environments, provided your infrastructure is templated/parameterized and saved as input sets. When deploying the service, you can provide the values as runtime inputs.

CONNECTORS

Issue:

A user is unable to create a Git connector and is not sure which URL to add under 'GitHub Repository URL'. (screenshot: github-connector)

Solution:

The URL follows this pattern: https://github.com/<account>/<repo>

Issue:

What happens if a user selects a Kubernetes cluster that has an INACTIVE delegate? (screenshot: k8-cluster-connector)

Solution:

(screenshot: delegate-setup) As shown in the screenshot above, verify that the delegate is CONNECTED and select one that is active with a recent heartbeat.

Issue:

(screenshot: remote-branch) The pipeline fails because the configured branch cannot be found on the remote repository.

Solution:

Check the git connector. Make sure the branch mentioned is available on the remote system.

Issue:

Hi, I provisioned a Helm Connector as the Connector Artifact Server. I would like to get the details of the connector via the Harness SDK. I am aware of the GitHub repo; however, I cannot find the exact function I need to use. Kindly help.

Solution:

If you want to get Harness configurations or automate the creation of resources you can use the REST APIs.

Issue:

Is there an example of working with a connector via the Harness SDK?

Solution:

Currently, the SDK doesn't have native support for connector resources, but they can still be fetched using the config-as-code APIs. If you know the path to the connector, you can fetch it that way. If you already have a connector created, you can find the path by going to Setup -> Config As Code (located in the top right-hand corner). From here you'll be able to see the path to the connector's YAML configuration. For example, the path would be something like: Setup/Artifact Servers/Harness Docker Hub.yaml. A screenshot is attached for reference.

Using that method, you'll get back the YAML. There's not yet a native object in the SDK that you can easily parse it into, but you can create one yourself and deserialize it. Let me know if that helps. (screenshot: Harness-sdk)

MANIFEST FILES

Issue:

What happens if a user deploying with a Helm chart inputs the wrong chart version? (screenshot: manifest-error)

Solution:

Check the chart version when adding it to the manifest step in the pipeline stage. (screenshot: manifest-solution)

ARTIFACTS

Issue:

When I try to run the pipeline, if I select the “tags” dropdown to add a tag to the execution, I get the following error: Stage Deploy_Dev: Please make sure that your delegates are connected. Refer [docs](https://ngdocs.harness.io/article/re8kk0ex4k) for more information on delegate Installation.

Solution:

If you are using Docker to pick your artifacts, make sure you define the right path. The correct URL is https://registry.hub.docker.com/. You can define tags by providing inputs at runtime or in your YAML file, e.g. tag: "latest".

ENVIRONMENT

Issue:

Multiple options are available for the user to set the SERVICE.

  1. If the user wants to set a fixed value:

Solution:

(screenshot: edit-service)

  2. If the user wants to pass it at runtime:

Solution:

(screenshot: pass-runtime)

DEPLOYMENT STRATEGY

Issue:

What if the user selects a step irrelevant to the deployment strategy? (screenshot: deployment-strategy)

Solution:

Make sure you select steps based on the deployment strategy. For example, if you select a canary deployment, you have a Canary Delete step to delete the workloads; for other deployment types, there is a Delete step for cleanup.

POSSIBLE ERRORS AT THE TIME OF DEPLOYMENT

Issue:

What happens if the user inputs an invalid timeout for a step in a pipeline stage?

Solution:

Follow the tooltip guidance. (screenshot: tooltip)

Issue:

When you enable editing mode and then try to navigate away from the YAML view. (screenshot: edit-mode)

Solution:

Make sure you complete all the steps in the stage. Incomplete YAML will block navigation.

DELEGATE

Issue:

Does the Harness delegate running in Kubernetes support node architecture using ARM64?

Solution:

  1. We do not provide the client-tool binaries for arm64. You would need to build a custom Docker image that already has the arm64 binaries in client tools, under the same paths we expect; then we would see they are already there and not overwrite them.
  2. To use arm64, you would need to reverse engineer our Docker image: create your own Docker image that installs an arm64 JRE to run the Harness delegate JAR, and then pre-populate the client-tools directory with arm64 versions of the expected binaries. The following binary list can be used:
kubectl/v1.13.2/kubectl
go-template/v0.4/go-template
harness-pywinrm/v0.4-dev/harness-pywinrm
helm/v2.13.1/helm
helm/v3.1.2/helm
helm/v3.8.0/helm
chartmuseum/v0.12.0/chartmuseum
chartmuseum/v0.8.2/chartmuseum
tf-config-inspect/v1.0/terraform-config-inspect
tf-config-inspect/v1.1/terraform-config-inspect
oc/v4.2.16/oc
kustomize/v3.5.4/kustomize
kustomize/v4.0.0/kustomize
scm/36d92fd8/scm

Note: We will be launching arm64 support in a couple of weeks.
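As a rough sketch of step 2 above (the client-tools base path is an assumption; match it to your delegate image's layout), you could pre-create the expected directory structure before copying in arm64 binaries you have built or downloaded yourself:

```shell
# Sketch: pre-create the client-tools layout the delegate expects, then
# drop in arm64 binaries. Paths and versions mirror the list above.
CLIENT_TOOLS="${CLIENT_TOOLS:-/tmp/client-tools}"   # assumption: adjust to the delegate's real path

for p in kubectl/v1.13.2 helm/v2.13.1 helm/v3.1.2 helm/v3.8.0 \
         kustomize/v3.5.4 kustomize/v4.0.0 oc/v4.2.16; do
  mkdir -p "$CLIENT_TOOLS/$p"
done

# Then copy in your arm64 builds, e.g.:
#   cp ./arm64/kubectl "$CLIENT_TOOLS/kubectl/v1.13.2/kubectl"
#   chmod +x "$CLIENT_TOOLS/kubectl/v1.13.2/kubectl"
```

With the binaries already present under the expected paths, the delegate should detect them and skip overwriting.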

Issue:

The user wants to provision target infrastructure using Terraform on Azure and deploy a sample app to it. The build pipeline executes, but the deploy stage fails. (screenshot: delegate-capability-issue)

Solution:

  1. Check whether the delegate is ACTIVE and has enough resources assigned to the pod. You can check the pod state with commands like:
kubectl get pods -n <<namespace>>
kubectl describe pods -n <<namespace>>
  2. Delegate capability issues depend on the specific use case. For example, if you want to do a Terraform deployment, some Terraform versions require Terraform to be installed on the delegate pod; if you want to do a Helm deployment using Helm V2, you will need to install Helm v2 and Tiller on the delegate pod.

  3. Please review our docs on supported integrations.

TEMPLATE

Issue:

Is there a way to do the equivalent of the helm template command to render the templates and display the output in the Harness?

Solution:

We run a Helm template when using a Helm Chart manifest type with the Kubernetes deployment type. We don't output it to a variable for the user to view; it can only be seen in our execution logs. The same applies to a Native Helm deployment, where we run helm template and then perform the helm install or helm upgrade: the output is only visible in the execution log.

(screenshot: helm-chart)

We have seen our users fetch the chart in a shell script step and run the helm commands on the chart to see the output before Harness does a deployment.

Issue:

I am using Harness to spin up a short-lived Kubernetes job. Is there any way to fetch the logs back into Harness?

Solution:

You can write a shell script that fetches the logs as an output, and then export/download them as deployment logs.

Issue:

I was referring to this guide in the Harness docs to learn about continuous delivery. On running the pipeline, this error showed up. Does anyone know why this happened and how to resolve it? (screenshot: API-calls)

Solution:

Harness uses its own ConfigMap for every deployment to store the release history in a Kubernetes cluster. This ConfigMap is used for rollback if the deployment fails. Say you are at your very first deployment (the ConfigMap is yet to be created by Harness) and you make an API call to check whether the ConfigMap exists; you may get this error:

Invalid request: Failed to get ConfigMap. Code: 403, message:{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"configmaps \"release-abcdef\" is forbidden: User \"system:serviceaccount:sa:harness\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"ns\"","reason":"Forbidden","details":{"name":"release-abcdef","kind":"configmaps"},"code":403} 

It is clear from the above error that API calls are failing due to missing permissions. Check the permissions of the service account and try again; this doc should help.
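As a hedged sketch of a fix, a namespaced Role and RoleBinding could grant the service account from the error message access to ConfigMaps (the names and namespaces below are taken from the example error and are purely illustrative):

```yaml
# Illustrative RBAC: let service account "harness" (namespace "sa") manage
# ConfigMaps in namespace "ns", per the example 403 error above.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: harness-configmap-access
  namespace: ns                  # namespace from the error message
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: harness-configmap-access
  namespace: ns
subjects:
  - kind: ServiceAccount
    name: harness                # service account from the error message
    namespace: sa
roleRef:
  kind: Role
  name: harness-configmap-access
  apiGroup: rbac.authorization.k8s.io
```

Apply with kubectl apply -f and retry the deployment; your cluster's actual role requirements may be broader than this minimal example.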

Issue:

Hi, I am creating a custom service command in Harness that simply deploys a zip file to my host. I realized that if for some reason my artifact does not get deployed correctly and the zip file does not exist on my host, an error will NOT get thrown when the unzip command fails. I tried forcing Harness to throw an error by writing to stderr using:

if [-f "$ARTIFACT_FILE_NAME" ]; then
unzip "$ARTIFACT_FILE_NAME"
else
1>&2 echo "Error: cannot find artifact"
fi
But Harness still treats this as an INFO message in the logs and does not fail the deployment. Any suggestions for how to fail a deployment through my bash script?

Solution: You can refer to this doc. Make sure you use set -e; also note that syntactically the '[-f' part of the script (missing a space after the bracket) is going to cause a failure.
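A corrected sketch of the script above (the deploy_artifact wrapper and its name are illustrative; ARTIFACT_FILE_NAME is assumed to be set by the Harness service command):

```shell
# Corrected sketch: fail the step loudly when the artifact is missing.
deploy_artifact() {
  local f="$1"
  if [ -f "$f" ]; then                       # note the space after '['
    unzip "$f"
  else
    echo "Error: cannot find artifact: $f" 1>&2
    return 1                                 # non-zero status, so set -e aborts the step
  fi
}
```

Combined with set -e at the top of the step script, a missing artifact now fails the deployment instead of being logged as an INFO message.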

Issue: How to pull zip files from artifacts in the cd stage?

Solution: We don’t support that yet in NextGen. We only support containerized Kubernetes deployments and native Helm deployments in the platform. Please review our docs on supported integrations.

SECRETS

SSH KEY

Issue:

Error while configuring the Linux server with ssh in Harness. ssh-server

Solution:

  1. The connection issue likely has something to do with the URL. For an AWS Linux box, it’s usually something like ec2-76-939-110-125.us-west-1.compute.amazonaws.com. For Azure, you would normally connect with something like ssh -i ~/.ssh/id_rsa azureuser@10.111.12.123, so in Harness try it without the https:// scheme.
  2. The SSH key in your screenshot looks like it’s in NextGen. You can also use a Shell Script step in NextGen.
  3. In Harness CurrentGen you can deploy to any Linux VM using our SSH Deployment Type, you can also use Azure VMSS.
  4. You can deploy to a physical server.
  5. If you’re just looking to copy files as part of a 'workflow', you can use a Shell Script step.
  6. For artifact copy, as opposed to deployment, you can use the SSH Service and Copy Artifact Command.

COMMON ISSUES

Issue:

Hi, need some help with the following questions:

  1. In Harness CD, while deploying a Helm chart (from an HTTP Helm repo), can I provide a custom values.yaml file from a different location?
  2. I installed a Helm chart on an AKS cluster using Harness, but when I run helm list locally in my terminal, I cannot see the release name in the output.
  3. In Harness CD, can I give a custom release name while installing the Helm chart in the pipeline?

Solution:

To answer your question:

  1. You can do that; refer to the values.yaml section.
  2. By default the app is installed in the harness namespace, so you need to add -n harness to the Helm command. Try:
helm list -n harness
  3. Yes, we can do that. (screenshot: custom-release-name)

Issue:

Can Harness CD deploy a helm chart and support kustomize patch on top of the helm chart? Are helm charts deployed by Harness visible using the helm CLI on the targets?

Solution:

  1. We can fetch Helm charts for Kustomize deployments, and we can apply patches to those deployments. The Harness agent has its own Helm client, which we use to query the deployed resources associated with the chart. With Kustomize plus Helm charts, Harness doesn’t deploy using Helm; we deploy using the kustomize CLI and can apply patches. We track the resources via the labels we apply to the Kubernetes objects we deploy.
  2. You can run helm list to list all the Helm charts; that’s no problem. However, Harness manages the state and takes care of rollback for you, as opposed to the Helm client and Tiller. We already know what the previous resources are, so in the event of a deployment failure we can roll back to the last known healthy state. We also have an Argo CD integration, where you can leverage your existing Argo CD cluster, manage it with Harness, and integrate it natively with Harness CI.

Harness can use ArgoCD for GitOps

Harness can manage and orchestrate deployments out of the box without Argo CD cluster management. We manage the deployed resources on the cluster, we have a slew of integrations with Kubernetes, Helm, Kustomize, and OpenShift, and we give you canary, blue-green, and rolling deployment logic out of the box. We integrate with Argo CD if you want that style of deployment with GitOps. Quick reference docs:

  1. Harness Argocd GitOps quickstart
  2. Use kustomize for Kubernetes deployments

Need further help?

Feel free to ask questions at community.harness.io or join community slack to chat with our engineers in product-specific channels like:

#continuous-integration Get support regarding the CI Module of Harness.

#continuous-delivery Get support regarding the CD Module of Harness.

· 6 min read
Dhrubajyoti Chakraborty

Introduction

This beginner guide helps learners understand the configuration settings for run steps in Harness CI. We will cover the different settings and permissions for the Harness CI Run Tests step, which executes one or more tests on a container image.

Before We Begin

Are you confused by terminology like Container Resources or Image Pull Policy while creating and configuring a run step for your CI pipeline? In this article we discuss a few such terms on the Harness CI platform and how you can configure them to set up your run step according to your requirements.

Configuration Parameters in a Run Step

  • Name
  • ID
  • Description
  • Container Registry
  • Image
  • Namespaces
  • Build Tool
  • Language
  • Packages
  • Run Only Selected Tests
  • Test Annotations
  • Pre-Command & Post-Command
  • Report Paths & Environment Variables
  • Output variables
  • Image Pull Policy
  • Container Resources

Name

The unique name for the run step. Each run step must have a unique name, and it is recommended to use a name that describes the step.

ID

Most Harness entities and resources include a unique ID, also referred to as the entity Identifier, that is not modifiable once the entity is created. It provides a constant way to refer to an entity and avoids issues that can arise when a name is changed.

Harness automatically generates an identifier for an entity; it can be modified while you are creating the entity, but not after the entity is saved.

Even if you rename the entity, the Identifier remains the same. The automatically generated Identifier is based on the entity name and follows the identifier naming conventions. If a name is already taken by another entity, Harness automatically adds a suffix in the form of -1, -2, etc.

Check out the documentation to know more about Entity Identifier Reference

Description

This is generally a text string describing the run step and what it does.

Container Registry

Container Registry refers to the Harness connector for a container registry. This is the registry for the image Harness will use to run build commands, such as DockerHub.

Check out the documentation to know more about Harness Container Image Registry

Image

Image is the name of the Docker image to use when running commands, for example: alpine-node. The image name should include the tag; it defaults to the latest tag if none is specified. You can use any Docker image from any registry, including images from private registries.

Different container registries have different name formats:

  • Docker Registry: enter the name of the artifact you want to deploy, such as library/tomcat. Wildcards are not supported.
  • GCR: enter the name of the artifact you want to deploy. Images in repos need to reference a path starting with the project ID that the artifact is in, for example: us.gcr.io/playground-243019/quickstart-image:latest.
  • ECR: enter the name of the artifact you want to deploy. Images in repos need to reference a path, for example: 40000005317.dkr.ecr.us-east-1.amazonaws.com/todolist:0.2.

Namespaces (C#)

A comma-separated list of the Namespace prefixes that you want to test.

Build Tool

This is where you select the build automation tool & the source code language to build, such as Java or C#.

Packages

This is a comma-separated list of source code package prefixes. For example: com.company., io.company.migrations

Run Only Selected Tests

If this option is unchecked, Test Intelligence is disabled and all tests will run.

Test Annotations

This is where you enter the list of test annotations used in unit testing separated by commas. Any method annotated with this will be treated as a test method. The defaults are: org.junit.Test, org.junit.jupiter.api.Test, org.testng.annotations.Test

Pre-Command & Post-Command

In pre-command you enter the commands for setting up the environment before running the tests. For example, printenv prints all or part of the environment.

In post-command you enter the commands used for cleaning up the environment after running the tests. For example, sleep 600 suspends the process for 600 seconds.

Report Paths

This refers to the path to the file(s) that store results in the JUnit XML format. You can enter multiple paths. Glob is supported.

Environment Variables & Output Variables

Environment variables refers to the variables passed to the container as environment variables and used in the Commands.

Output variables expose Environment Variables for use by other steps/stages of the Pipeline. You can reference the output variable of a step using the step ID and the name of the variable in Output Variables.

(screenshot: output-var)
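The mechanism can be sketched in pipeline YAML; the step identifiers and variable name below are illustrative, and the expression format is an assumption based on Harness NextGen conventions:

```yaml
# Illustrative sketch: one Run step exports a variable, a later step references it.
- step:
    type: Run
    identifier: build_step          # illustrative step ID
    spec:
      command: export BUILD_NUM=42
      outputVariables:
        - name: BUILD_NUM           # exposes the env var to other steps
- step:
    type: Run
    identifier: use_step
    spec:
      # assumed expression format: step ID + variable name
      command: echo <+steps.build_step.output.outputVariables.BUILD_NUM>
```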

Image Pull Policy

This is where you set the pull policy for the image.

  • Always: The kubelet queries the container image registry to resolve the name to an image digest every time the kubelet launches a container. If the kubelet encounters an exact digest cached locally, it uses its cached image; otherwise, the kubelet downloads (pulls) the image with the resolved digest, and uses that image to launch the container.

  • If Not Present: The image is pulled only if it isn't already present locally.

  • Never: The kubelet assumes that the image exists locally and doesn't try to pull the image.

Container Resources

The container resources configuration specifies the maximum resources used by the container at runtime.

Limit Memory: the maximum memory that the container can use. You can express memory as a plain integer or as a fixed-point number using the suffixes G or M, or the power-of-two equivalents Gi and Mi.

Limit CPU: the maximum number of cores that the container can use. CPU limits are measured in cpu units. Fractional requests are allowed: you can specify one hundred millicpu as 0.1 or 100m.
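For intuition, these limits correspond to Kubernetes-style resource limits; this is a hedged sketch using Kubernetes field names, not necessarily the exact Harness step YAML:

```yaml
# Illustrative Kubernetes-style resource limits for a step container.
resources:
  limits:
    memory: 500Mi   # plain integer, or fixed-point with M/G, or power-of-two Mi/Gi
    cpu: 100m       # 100 millicpu == 0.1 of a core
```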

See Resource units in Kubernetes

note

This is not applicable if you have selected Hosted by Harness in the Infrastructure settings.

Timeout

This specifies the maximum time the step is allowed to run. Once the timeout is reached, the step fails and pipeline execution continues.

NOT ABLE TO TROUBLESHOOT THE ENCOUNTERED ERROR

In case the user is unable to troubleshoot the application error or pipeline execution failures, the user can submit a ticket to Harness Support. To log a ticket, follow this process:

  1. Click the Help button in the Harness Manager
  2. Click Submit a Ticket or Send Screenshot
  3. Fill out the pop-up form and click Submit Ticket or Send Feedback

· 7 min read
Dhrubajyoti Chakraborty

Introduction

This guide covers common use cases to help end users implement pipeline executions successfully in Harness CI.

Harness provides various sources & tools to easily troubleshoot frequently encountered errors to fix pipeline failures. This guide lists some of the common issues faced while implementing and designing pipelines in Harness CI and the possible solutions.

What’ll we be covering here?

  • Syntax Verification
  • Variable Verification
  • Troubleshooting Delegate Installation Errors
  • Troubleshooting Triggers Errors
  • Troubleshooting Git Experience Errors

Verify Syntax

A common early-stage error is incorrect syntax. In that case, the pipeline returns an invalid YAML syntax message and does not start running.

Edit pipeline.yaml in the pipeline studio

The YAML view of the pipeline editor is recommended for editing (rather than the graphical stage view). Major features of the editor include:

  • Creation of connectors, secrets & pipelines from scratch
  • Realtime schema validation
  • Intellisense & auto-completion
  • Field descriptions & rich inline documentation
  • Free Templates for YAML Samples

This helps the developer validate an existing pipeline’s correctness, and makes it quick to modify or copy a pipeline as code.

Verify Variables

An integral part of troubleshooting errors in Harness CI is verifying the variables present in the pipeline and their values. Much of the pipeline configuration depends on variables, so verifying them is often the fastest way to find the root cause and a potential solution.

Visit the Variables section on the Harness CI platform. Check that the expected variables and their values match and are used at the expected stage of the pipeline.

Delegate Setup Failure

The majority of encountered errors in Harness CI revolve around delegate setup. Make sure you understand how to set up a Harness delegate from scratch and how the Harness Manager and delegate complement each other.

Delegate setup also fails if the SSH key used for deployment to the targeted host is incorrect. This usually happens due to incorrect information about the SSH configuration in the Harness Secrets Management and also if the targeted host is not configured to support SSH connections.

For troubleshooting, check the watcher.log file, which provides information about the delegate version.

Delegate fails to establish a connection with the Harness Manager

If the delegate fails to connect to the Harness Manager, try the following:

  • Use ping on the delegate host to test whether response times for app.harness.io and other Harness URLs are consistent.
  • Use traceroute to check the network route and verify whether traffic is being redirected.
  • Use nslookup to verify that DNS resolution is working.
  • Flush the client's DNS cache (the command depends on the OS).
  • Run tests to check for local network errors or NAT license limits.
  • For cloud service providers, ensure that the security groups allow outbound traffic on HTTPS port 443.
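The connectivity checks described above can be sketched as a small script to run from the delegate host; app.harness.io is the default Harness Manager URL, and exact flags may need adjusting for your OS:

```shell
# Sketch: network diagnostics for delegate-to-manager connectivity.
check_harness_connectivity() {
  local host="${1:-app.harness.io}"
  ping -c 3 "$host"                 # response times should be consistent
  traceroute "$host" || true        # look for unexpected redirection in the route
  nslookup "$host"                  # confirm DNS resolution works
  # Expect an HTTP status code over port 443; a failure here suggests
  # outbound HTTPS traffic is blocked (e.g. by a cloud security group).
  curl -sS -o /dev/null -w '%{http_code}\n' "https://$host"
}
```

Run it as check_harness_connectivity, or pass another Harness URL as the first argument.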

No eligible delegate found for the assigned pipeline execution

This error is encountered when the delegate fails to meet the URL validation criteria. All delegates in Harness are identified by their Harness account ID plus some additional factors. For example, on VMs, delegates are identified by the combination of hostname and IP address, so if the IP changes, the Harness Manager fails to identify the delegate. In the case of K8s and ECS delegates, the IP changes when the pod is rescheduled.

The delegate sends the heartbeat, deployment data, and time-series and log data for continuous verification to the Harness Manager. The credentials used by the delegate must have the roles and permissions required to perform the task. For example, if the account used for an AWS Cloud Provider lacks the roles required for ECS deployments, the task will fail.

For more information visit How does Harness Manager Identify Delegates?

K8s Delegate Deletion Failure

To delete the Harness delegate from a K8s cluster, delete the StatefulSet for the delegate. A StatefulSet ensures that the expected number of pods is running and available, so deleting the delegate pod without deleting the StatefulSet results in the pod being recreated.

For example, if the delegate name is delegate-sample, delete the StatefulSet with the command below:

$ kubectl delete statefulset -n harness-delegate delegate-sample

Triggers Rejection Failures

This usually happens when the user uses a webhook trigger to execute a pipeline or workflow and the artifact name in the cURL command differs from the artifact name in the Harness Service.

Trigger Rejected. Reason: Artifacts Are Missing for Service Name(S)

This is usually the result of a bad artifact build version in the cURL command. For example, a cURL command with build number v1.0.4-RC8:

curl -X POST -H 'content-type: application/json' \
  --url https://app.harness.io/gateway/api/webhooks/... \
  -d '{"application":"tavXGH . . z7POg","artifacts":[{"service":"app","buildNumber":"v1.0.4-RC8"}]}'

If the Harness Service artifacts use a different nomenclature, the cURL command will fail to execute, so ensuring the webhook cURL command has the correct artifact name is important.

Failure when executed Git Push in Harness

With two-way sync between the Git repository and the Harness application, a push to Harness will fail unless the Git YAML files and the required settings are configured before pushing the app to Harness.

For example, in case we have a predefined infrastructure definition and the required labels or parameters are not filled or filled in incorrectly the push to git is more likely to encounter a failure.

Using the Harness Manager to configure the app first is the best way to avoid this error. It generally ensures that all required settings are configured correctly and synced with the Git repository.

Triggers: zsh: no matches found

On some operating systems, notably macOS, the default shell is zsh. The zsh shell requires that the cURL command either avoid the “?” character or put quotes around the URL.

For example, this will fail in zsh:

curl -X POST -H 'content-type: application/json' --url https://app.harness.io/gateway/api/webhooks/xxx?accountId=xxx -d '{"application":"fCLnFhwsTryU-HEdKDVZ1g","parameters":{"Environment":"K8sv2","test":"foo"}}'

This will work:

curl -X POST -H 'content-type: application/json' --url "https://app.harness.io/gateway/api/webhooks/xxx?accountId=xxx" -d '{"application":"fCLnFhwsTryU-HEdKDVZ1g","parameters":{"Environment":"K8sv2","test":"foo"}}'

User does not have "Deployment: execute" permission

The error User does not have "Deployment: execute" permission indicates that the user’s Application Permissions > Settings do not include execute. This can be solved by fixing the application permission configuration. The user can modify the Harness Configure as Code YAML files for the Harness application.

To enable editing of the YAML files, the user’s Harness User Group must have the account permission Manage Applications enabled, as well as the Application Permission Update for the specific applications.

NOT ABLE TO TROUBLESHOOT THE ENCOUNTERED ERROR

In case the user is unable to troubleshoot the application error or pipeline execution failures, the user can submit a ticket to Harness Support. To log a ticket, follow this process:

  1. Click the Help button in the Harness Manager
  2. Click Submit a Ticket or Send Screenshot
  3. Fill out the pop-up form and click Submit Ticket or Send Feedback