
14 posts tagged with "cme"


· 6 min read
Krishika Singh

In a recent survey, we saw that developers spend 30-40% of their time writing code. The question, then, is where the remaining 60-70% of their time goes: it is spent managing features, fixing deployment issues, and reporting to various stakeholders when things go wrong.

There is a common solution to all of these problems: Feature Flags.

Feature Flags are essentially conditional statements (think of them as if-else statements), which is what makes them adaptable enough to solve the many use cases we are going to discuss further in this blog.
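To make the "if-else statement" idea concrete, here is a minimal sketch using a hypothetical in-memory flag store (the dictionary and function names are illustrative, not the Harness SDK):

```python
# A feature flag reduced to its essence: an if-else around a code path.
# FLAGS is a hypothetical in-memory store; a real system would read flag
# state from a flag service such as Harness at runtime.
FLAGS = {"new_checkout": True}

def is_enabled(name: str, default: bool = False) -> bool:
    """Look up a flag, falling back to a safe default if it is missing."""
    return FLAGS.get(name, default)

def checkout(cart: list) -> str:
    # The flag decides which of two merged code paths the user sees.
    if is_enabled("new_checkout"):
        return f"new checkout flow ({len(cart)} items)"
    return f"legacy checkout flow ({len(cart)} items)"
```

Flipping the value in the store switches every caller between the two code paths without a redeploy, which is what makes flags useful for all the use cases below.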

In this blog, I will walk through a few use cases, first looking at how each one works without feature flags, and then at how it can be done with Harness Feature Flags.

1. User Targeting

User Targeting allows only a subset of users to access a particular feature. This could be a single customer who has requested a new feature, or something broader, such as giving only paying customers access to a particular feature.

Without Feature Flags

In order to target users, devs have to stand up some way to recognize a user based on attributes and then build a mechanism to show the feature only to them. This can be done using a combination of runtime environment variables and backend database entries. Usually, these requests come from the customer-facing and sales teams. For developers, fielding these requests means taking time out to properly understand who the target users are and which attributes define them, testing the changes in a lower environment to make sure nothing unusual happens, and then pushing the code live. Sometimes the app or service even needs to be restarted for these changes to take effect.

With Harness Feature Flags

Harness Feature Flags provides the basic user targeting capabilities that product development teams are looking for, and additionally lets you automate the process with progressive delivery: roll out to small user groups, verify behavior, manage changes, expand the user groups, and repeat until you reach 100%.
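The "expand until 100%" part of progressive delivery is commonly implemented by hashing each user into a fixed bucket, so that a user's assignment is sticky across sessions. The sketch below illustrates that general technique; it is not Harness's exact algorithm:

```python
import hashlib

def in_rollout(user_id: str, flag: str, percent: int) -> bool:
    """Deterministically place a user into one of 100 buckets per flag.

    A user is in the rollout when their bucket number is below the
    current rollout percentage.
    """
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent
```

Because the hash is deterministic, raising `percent` from 10 to 25 only adds users: anyone who already had the feature keeps it, which is exactly the behavior you want when expanding a rollout.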

When you create a Feature Flag in Harness, you will be asked to select the type of flag you want to use, and then prompted to define the user attributes that determine whether the feature is available to a given user. This targeting can be done in code by a developer or in the UI by a non-developer, and the changes sync instantly.

2. Testing in Production

You cannot fully validate a feature until it's live in front of real users - something you cannot replicate in pre-prod. Without a technique for testing in production with real users, it can take 3-4x as long to release a feature to customers.

Without Feature Flags

Without Feature Flags, testing a new feature is the same as deploying a completed feature to production. That is stressful, especially if you are making a major change like overhauling the UI or migrating to a new database. You don't want a new ticket telling you that your implementation was wrong and needs to be redone. You simply don't know what will actually happen in production, and that is a tough reality.

With Harness Feature Flags

How you want to test in production with Harness depends on what kind of feature you are planning to test. There are some common things Harness helps with as you instrument and run your tests.

  • First, flag evaluation metrics help you see how often a flag is being evaluated, by which target groups, who is making changes internally, and what changes are being made.

  • Second, if you know you want to test the feature, you can automate the setup of these tests using a pipeline that runs sequentially for a defined amount of time. For example, you can test a feature with 10% of users for 2-3 days, then after approval roll it out to a different 25% of users with a slight variation, and repeat until the test is complete.

3. Trunk-Based Development

In trunk-based development, each developer divides their work into small batches and merges them into the trunk at least once, and often several times, a day.

Without Feature Flags

Suppose a feature under development takes more than one sprint. There are two options:-

  1. Merge the unfinished code to master, making sure nothing breaks.
  2. Leave the branch open for a long time, risk the codebase changing significantly by the time the PR is raised, and deal with the merge conflicts.

Neither scenario is ideal: both take a significant amount of developers' time for something that isn't core to the job, and there is a lot of risk involved. If something breaks on the production side of the house, a lot of rework is needed to separate the code out again.

With Harness Feature Flags

Once new code is wrapped inside a feature flag, it enters a safe zone and won't cause disruptions in the main application, which simplifies both scenarios discussed above.
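For example, a half-finished feature merged to trunk can ship dark behind a flag. This is illustrative Python pseudocode, not the Harness SDK API:

```python
# Hypothetical flag guarding unfinished work that is already merged to trunk.
# The flag stays off in production until the feature is complete.
FLAGS = {"new_search": False}

def search(query: str) -> str:
    if FLAGS.get("new_search", False):
        # New, still-incomplete implementation: merged, deployed, but dark.
        return f"new engine results for {query!r}"
    # Every user keeps hitting the stable path, even though both are merged.
    return f"stable engine results for {query!r}"
```

This is what lets developers merge small batches daily: incomplete code lives on trunk without ever being exercised in production until the flag is turned on.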

At its core, using Harness Feature Flags enables trunk-based development. Harness makes it easy by ensuring all releases meet governance standards, no matter how small or large the change. You can also set up your own governance guardrails as part of your feature release pipeline, so that every time a feature is merged into the main trunk and set for release, it runs against a release pipeline to make sure it meets standards, reducing the risk involved. And for code sitting around for a while, Harness also calls out stale flags so that they can be removed from the system altogether.

Conclusion

Feature Flags are a powerful addition to agile development, giving teams more control over their codebase and the end-to-end experience.

So, what are you waiting for? Visit Harness and try out Harness Feature Flags.

You can also visit Harness docs to learn more about Feature Flags as well as learn how you can use them in your applications.

· 3 min read
Krishika Singh

What are Feature Toggles/Feature Flags?

Feature Toggles, also called Feature Flags, are a powerful technique that allows teams to modify system behavior without changing code. They are a set of patterns that help deliver new functionality to users rapidly but safely, and they are great for better control and experimentation of features.

Why Should we use Feature Flags?

  • Use feature flags as part of your regular development process. Any time you release a feature, wrap it in a feature flag so you can test it, control who has access to it, and gradually roll it out to users.

  • One of the biggest challenges the product development team faces is delivering and controlling new features. With continuous delivery and feature flag management, a team can launch and control its features.

  • With feature flags, you can run canary testing, a technique that lets a team test a new feature on a subgroup of users and see how it performs before rolling the feature out to a wider audience.

  • Feature flags allow you to instantly toggle between different versions of your product, modifying system behavior without disruptive code changes.

What are the best practices to adopt while using Feature Flags?

  • Use a standardized naming scheme:- Without one, people in the organization may give flags similar names and get them mixed up, which could result in activating the wrong flag and causing potential disruptions in your system.

  • Control access to flags:- Set up logging to know which change was made by whom. This is important for reducing dependency between the product and engineering teams, and productivity improves when there is transparency about new changes.

  • Conduct regular cleanups of your flags:- Make sure you remove flags that are no longer in use every now and then.

Why use Harness Feature Flags?

Harness Feature Flags is made specifically for developers and designed to be fast and easy to use.

Some salient features are:-

  1. Simple UI-based feature release workflows:- Users can create templates and standardize them across feature flags that have the same operational needs.

  2. Governance and verification:- Users can ensure production pushes always meet defined organizational standards and minimize the negative impact of any issues in prod. In addition, users can automate service verification once a feature is out, ensuring that if an issue occurs, the feature is turned off to minimize impact.

  3. Integration into CI/CD:- By presenting the feature flag as a pipeline, feature flag management becomes a natural step in the everyday workflow of development teams and is integrated into CI/CD as a unified pipeline.

  4. Management and Governance: Letting the team build rules and processes while automating cleanups and flag lifecycle management. Ensuring teams can keep their systems secure, compliant, and standardized wherever possible is critical to their goals.

· 3 min read
Debabrata Panigrahi

Introduction

In this tutorial, we will go through a step-by-step example of how to use Harness CI for Maven testing.

Pre-Requisite:

A Docker Connector to fetch the public maven-chrome-jdk8 Docker image.

Let’s now deep-dive into the step-by-step tutorial, in which we use the Harness SaaS platform to set up Maven testing:

Step-1:

Start with the Build module and give the stage the name test. Make sure to keep the Clone Codebase slider off, as it is not required in this example.

Step-2:

Now let’s move to the next part of the pipeline: select the infrastructure, and choose Harness Hosted Builds.

Step-3:

Moving on to the execution step, let’s add a Run step and name it “testrun”. Under Container Registry, add the Docker Connector created earlier, and under Image, add “rvancea/maven-chrome-jdk8”. Then add the shell command mvn clean compile -DsuiteXmlFile=Batch1 test and apply the changes.

testrun

Step-4:

Now, let’s add another Run step similar to the one above and name it reports; this time the command changes to find . -name "*.xml".

filesgen

Step-5:

It’s time to add the failure strategy as a Run step, with the following script.

actualFailedTestsStatus=<+execution.steps.testrun.status>
echo $actualFailedTestsStatus
if [ "$actualFailedTestsStatus" = "IGNORE_FAILED" ]
then
echo "tests have failed"
exit 1
else
echo "Failure reruns have passed"
exit 0
fi

faliurestrat

Step-6:

Now that the pipeline is complete, let’s save and run it; the results look like the following in the console logs.

result

reports

Once the run is successful, the above list of files is generated and can be stored and processed further as the test reports.

For further reference, the following is the pipeline YAML of the above example:

pipeline:
  name: yaml
  identifier: yaml
  projectIdentifier: HarnessDemo1
  orgIdentifier: default
  tags: {}
  stages:
    - stage:
        name: test
        identifier: test
        type: CI
        spec:
          cloneCodebase: true
          infrastructure:
            type: KubernetesHosted
            spec:
              identifier: k8s-hosted-infra
          execution:
            steps:
              - step:
                  type: Run
                  name: testrun
                  identifier: testrun
                  spec:
                    connectorRef: account.harnessImage
                    image: rvancea/maven-chrome-jdk8
                    shell: Sh
                    command: |+
                      mvn clean compile -DsuiteXmlFile=Batch1 test

                    privileged: false
                    reports:
                      type: JUnit
                      spec:
                        paths:
                          - target/surefire-reports/junitreports/*.xml
                  failureStrategies:
                    - onFailure:
                        errors:
                          - AllErrors
                        action:
                          type: Ignore
              - step:
                  type: Run
                  name: reports
                  identifier: failstrat
                  spec:
                    connectorRef: account.harnessImage
                    image: rvancea/maven-chrome-jdk8
                    shell: Sh
                    command: find . -name "*.xml"
                  when:
                    stageStatus: All
                  failureStrategies: []
              - step:
                  type: Run
                  name: failstrategy
                  identifier: step3
                  spec:
                    connectorRef: account.harnessImage
                    image: rvancea/maven-chrome-jdk8
                    shell: Sh
                    command: |-
                      actualFailedTestsStatus=<+execution.steps.testrun.status>
                      echo $actualFailedTestsStatus
                      if [ "$actualFailedTestsStatus" = "IGNORE_FAILED" ]
                      then
                        echo "tests have failed"
                        exit 1
                      else
                        echo "Failure reruns have passed"
                        exit 0
                      fi
                  when:
                    stageStatus: All
                  failureStrategies: []
  properties:
    ci:
      codebase:
        connectorRef: harnessRud
        build: <+input>

What’s Next?

The above pipeline and use case came from one of our community users and was built to their requirements by the community engineering team. Feel free to ask questions at community.harness.io, or join the community Slack to chat with our engineers in product-specific channels like:

· 6 min read
Krishika Singh

In this blog, we are going to talk about how easily you can set up your pipeline using YAML.

Harness includes visual and YAML editors for creating and editing Pipelines, Triggers, Connectors, and other entities. Everything you can do in the visual editor you can also do in YAML.

For detailed information about using Harness YAML visit Harness YAML Reference and Harness YAML Quickstart.

Before we begin

Make sure you have the following set up before you begin this tutorial:

  • GitHub Account: This tutorial clones a codebase from a GitHub repo. You will need a GitHub account so Harness can connect to GitHub.
  • Docker Hub account and repo: You will need to push and pull the image you build to Docker Hub. You can use any repo you want, or create a new one for this tutorial.

Getting Started

  • Fork the repository

    For this demo, we are using Python-pipeline-samples.

  • Log in to the Harness UI

    • Go to Harness.

    • Sign up for the Harness platform.

    • Once you sign up, you will enter the Harness UI as shown below.

    • Go to Builds and select Create a Project.

      • Give the Project a name -> Save and Continue.
      • You can also invite collaborators; this is optional.
    • After Save and Continue, select the Continuous Integration module.

      After selecting Continuous Integration, you will see the screen shown in the screenshot below.

    • Select Create a Pipeline.

      • Name your Pipeline.
      • Choose the setup as Inline.
      • Select Start. Refer to the screenshot below:

Getting Started

  • After creating the Pipeline, you will enter the Pipeline Studio as shown below.
  • As you can see, the Pipeline Studio has two options: VISUAL and YAML. Navigate to the YAML editor, as shown below.
  • Copy and paste the YAML below into the editor.

Note:- Paste the YAML below just after the tags: {} line.

  properties:
    ci:
      codebase:
        connectorRef: <+input>
        build: <+input>
        depth: <+input>
        prCloneStrategy: <+input>
  stages:
    - stage:
        name: build test and run
        identifier: build_test_and_run
        type: CI
        spec:
          cloneCodebase: true
          infrastructure:
            type: KubernetesHosted
            spec:
              identifier: k8s-hosted-infra
          execution:
            steps:
              - step:
                  type: Run
                  name: Code compile
                  identifier: Code_compile
                  spec:
                    connectorRef: <+input>
                    image: python:3.10.6-alpine
                    shell: Sh
                    command: python -m compileall ./
              - step:
                  type: Run
                  name: Create dockerfile
                  identifier: Create_dockerfile
                  spec:
                    connectorRef: <+input>
                    image: alpine
                    shell: Sh
                    command: |-
                      touch pythondockerfile
                      cat > pythondockerfile <<- EOM
                      FROM python:3.10.6-alpine
                      WORKDIR /python-pipeline-sample
                      ADD . /python-pipeline-sample
                      RUN pip install -r requirements.txt
                      CMD ["python3" , "./app.py"]
                      EOM
                      cat pythondockerfile
              - step:
                  type: BuildAndPushDockerRegistry
                  name: Build and Push an image to the docker registry
                  identifier: Build_and_Push_an_image_to_docker_registry
                  spec:
                    connectorRef: <+input>
                    repo: <+input>
                    tags:
                      - latest
                    dockerfile: pythondockerfile
                    optimize: true
        variables:
          - name: container
            type: String
            description: ""
            value: docker
    - stage:
        name: Integration test
        identifier: Integration_test
        type: CI
        spec:
          cloneCodebase: true
          infrastructure:
            useFromStage: build_test_and_run
          execution:
            steps:
              - step:
                  type: Background
                  name: "python server "
                  identifier: python_server
                  spec:
                    connectorRef: <+input>
                    image: <+input>
                    shell: Sh
                    command: python3 ./app.py
              - step:
                  type: Run
                  name: "test connection to server "
                  identifier: test_connection_to_server
                  spec:
                    connectorRef: <+input>
                    image: curlimages/curl:7.73.0
                    shell: Sh
                    command: |-
                      sleep 10
                      curl localhost:5000

  • Click on Save.

    Navigate to VISUAL and you can now see your two-stage pipeline ready, as shown in the screenshot below. That's the beauty of YAML in Harness.

    You can navigate through all the steps in the pipeline and explore the pipeline.

Inputs

Before running the pipeline, let's create a GitHub and Docker connector.

  • GitHub Connector

    Under Project setup select Connectors.

    Click on + New Connector

    Select Code Repositories and Choose Github.

    You can refer to the below screenshot.

    Change the Connector settings as follows:

    1. Overview

      Name: python-sample-connector

      Select Continue.

    2. Details

      URL Type: Repository

      Connection Type: HTTP

      GitHub Repository URL: Paste the link of your forked repository

      Select Continue.

    3. Credentials

      Username: (Your Github Username)

      Personal Access Token: Check out how to create personal access token

      Secret Name: Git-Token

      Secret Value: PAT value generated in Github

      Select Enable API access (recommended)

      Under API Authentication -> Personal Access Token, select the name of the secret created in the previous step.

      Select Continue.

    4. Select Connectivity Mode

      Under Connect to the provider, select Connect through Harness Platform.

      Select Save and Continue.

    5. Connection Test

      You will see Verification Successful which means your connector is connected successfully to your codebase.

      For reference, you can also check out this video on our Harness Community YouTube channel.

      To develop more understanding of Connectors, check out the docs here: https://docs.harness.io/category/o1zhrfo8n5-connectors
  • Create a Docker Connector

    Under Project setup select Connectors.

    Click on + New Connector

    Select Artifact Repositories and choose Docker Registry.

    You can refer to the screenshot below

    Change the settings as follows

    1. Overview

      Name: docker quickstart

    2. Details

      • Docker registry URL - https://index.docker.io/v1/
      • Provider type - Docker Hub
      • Authentication - Username and Password
      • Username - Docker hub username
      • Secret Token - Check out how to create a Docker PAT
    3. Select Connectivity Mode

        Under Connect to the provider, select Connect through Harness Platform.

      Select Save and Continue.

      For your reference you can also check out this video on our Harness Community YouTube channel:

  • Create a Docker Repository

    1. Log in to Docker Hub
    2. Go to Repositories -> Select Create Repositories.
    3. Give your repository a name and choose whether you want the repo to be public or private.

Run the Pipeline

Navigate back to the Pipeline studio and click on Run.

On clicking Run, you will see a page asking for the inputs needed to run the pipeline; you can refer to the screenshot below.

  1. CI Codebase

    • Connector- Select the Github Connector you created in the previous step.
  2. Stage: build test and run

    Step: Code compile

    • Container Registry- Select the Docker Connector you created in the previous step.

    Step: Create dockerfile

    • Container Registry- Select the Docker Connector.

    Step: Build and Push an image to Docker Registry

    • Docker Connector- Select the Docker Connector.
    • Docker Repository- docker-hub-username/repository-name
  3. Stage: Integration Test

    Execution

    Step: python server

    • Container registry- Select the Docker Connector.
    • Image- docker-hub-username/repository-name

    Step: test connection to the server

    • Container registry- Select the Docker Connector.

Click on Run Pipeline.

It will take less than 3 minutes to execute your pipeline.

After successful completion and execution of all the steps you will see something similar to this:

This article explained the YAML-based onboarding process. If you want to try Harness UI-based onboarding, check out this tutorial:-

· 5 min read
Ritik Kapoor

Billions of dollars are spent on the cloud annually, and with experience, organisations are able to see the bigger picture in which cloud costs can outweigh the benefits. Harness Cloud Cost Management, an active cloud cost solution, can save up to 75% of your cloud spend.

In this blog, we are going to discuss cloud cost solutions which can cut down manual overheads and automate cloud savings when you’re busy writing code.

Getting Started

Now, before we take our discussion to active cloud cost solutions with Harness CCM, let’s get an overview of passive and active cloud cost solutions.

Harness CCM allows an organisation to understand and manage its cloud cost through visibility and automation. On this basis, cloud cost management can be broadly categorised into the following -

  • Passive Cloud Cost Management - Cloud management through such techniques includes visibility into cloud costs and forecasts based on usage history. This includes calculating total cost and providing deep knowledge of one’s cloud spending.
  • Active Cloud Cost Management - Where a passive solution alerts you about rising costs, an active cloud cost solution is capable of making intelligent, automated decisions. This can help scale down resources and cut costs caused by idle or over-provisioned resources.

The latter can be achieved through Harness Next Generation CCM.

Harness First Gen CCM

Harness First Generation CCM provides visibility into your cloud resources at an hourly granularity. It’s a “tag-less” solution, i.e., you don’t need tagging to get relevant information at any level. It is designed to provide relevant insights into cloud cost that can assist in maintenance and analysis. It has features such as -

  • Anomaly Detection - Any sudden change in cloud spending can be notified through Slack or email.
  • Forecasting - Based on cloud usage history, a prediction model tracks and forecasts your cloud cost at required intervals.
  • Budgeting - This feature allows you to set budgets at different levels of your organisation and provides email alerts when spending exceeds the set amount.

Harness First Generation is capable of root-cause cost analysis when tied to Harness's continuous delivery service.

Harness Next Gen CCM

Harness Next Gen, provides the visibility of First Gen and is capable of regulating cloud costs through intelligent automation. This makes Harness Next Gen an active cloud cost management solution.

Harness Next Gen enables an organisation to maintain and regulate cloud cost through custom rules which allows scaling down of idle or over-provisioned resources.

Features of Harness Next Generation CCM

Harness Next Gen CCM is an active cloud cost solution with all the features of first-gen and more. Let’s take a look at the key features of Harness Next Gen CCM -

Inventory Management

Harness Inventory Management is achieved through AWS EC2 Inventory Cost Dashboard. It provides granular insights into AWS EC2 instances. It can track various cloud cost indicators across different zones and time ranges. Through visuals, you can understand your cloud cost trends and make decisions based on data and analysis.

You can adjust sliders and groups on the dashboard to filter data as required. For example, set up a date range and select multiple states from running, stopped, terminated, etc. This makes the dashboard dynamic, curating graphs and charts to your requirements.

inventory_ss.png

Auto Stopping Rules

Cloud AutoStopping solves the problem of idle cloud wastage and automates cost savings. These rules can be set up for your non-production workloads. AutoStopping shuts down compute resources that have been idle for a set duration: it automatically detects the idle time and shuts down on-demand resources.

You can also run non-production workloads on spot instances to save up to 90% of the cost. These spot instances are dynamically orchestrated on the same infrastructure, so switching between spot instances happens without interruptions. Spot instances are terminated when not in use and automatically started when there is traffic or usage.
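At its core, an AutoStopping-style rule makes a simple decision: compare the time since the last request against an idle threshold. The sketch below is a simplified illustration of that decision only; Harness's real implementation watches live traffic and orchestrates the cloud provider:

```python
def should_stop(last_request_ts: float, idle_threshold_s: float, now: float) -> bool:
    """Stop a resource once it has been idle longer than the threshold.

    last_request_ts: timestamp of the most recent request (seconds).
    idle_threshold_s: how long a resource may sit idle before stopping.
    """
    return (now - last_request_ts) >= idle_threshold_s
```

A scheduler would evaluate this check periodically per resource, stopping anything that has crossed the threshold and restarting it when new traffic arrives.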

auto-ss.png

Perspectives and Budgets

Perspectives are the best way to view the correlation of cost data across clouds, clusters, and labels. Perspectives organise resources allocated to a team across multiple cloud providers in one place. Suppose a QA team has access to GCP and Azure for running tests and quality checks. A QA Perspective would bundle these resources in a single place where they can be analysed, maintained, and regulated together. So, instead of managing cloud costs per provider, Perspectives enable tracking at the team level.

ss_perspectives.png

Budgets assist Perspectives through control checks/limits attached to a team’s allocated cloud resources. Alerts can be sent when the cost exceeds (or is forecast to exceed) the set budget.

All budgets created in Harness First Generation CCM will be available automatically in Next Generation CCM.

ss_budgets.png

Enhanced Business Intelligence Dashboards

Harness Dashboards help in measuring software delivery performance. These BI Dashboards are powered by Looker. You can use a built-in dashboard or create a custom dashboard as required. These dashboards are available for data across all the modules, and Harness gives you the ability to create your own dashboard to access the key metrics that drive software delivery outcomes.

These Dashboards come with robust reporting abilities: you can set alerts based on preset metrics and schedule reports.

At Harness, we have been working on dashboards that allow you to identify bottlenecks, inform operations, and help drive business decisions. The information and actual metrics may vary based on how engineering teams develop their product.

dashboard ss.jpeg

Conclusion

This blog explained how Harness CCM is an active cloud cost solution and how it stands sentry against rising cloud costs. To learn more about your cloud spending and set up your own cloud cost patrol unit, visit harness.io.

In case you are stuck somewhere, need some assistance, or want to talk about cloud costs in general, join the #cloud-cost-management channel in the Harness Community Slack.

· 11 min read

INTRODUCTION

This guide helps you deal with common issues and recommended solutions, from pipeline creation through to execution. The issues are categorized into different sections. We will bring more troubleshooting tips in our upcoming guide series.

SERVICE

Issue:

Can we use the same service name for dev, QA, and prod, so that when I choose the same service it automatically takes the same infra and execution mode, or do we need to mention all of those in the CD deploy stage?

Solution:

We can use the same service to deploy in all environments, provided your infrastructure is templated/ parametrized and saved as input sets. While deploying the service, you can provide it as runtime input.

CONNECTORS

Issue:

A user is unable to create a Git connector and is not sure what URL has to be added under 'GitHub Repository URL'. github-connector

Solution:

Pattern URL looks like this: https://github.com/<account>/<repo>

Issue:

What happens if a user selects a Kubernetes cluster that has an INACTIVE delegate? k8-cluster-connector

Solution:

delegate-setup As per the screenshot above, verify that the delegate is CONNECTED and select one that is active with a ‘heartbeat’.

Issue:

remote-branch

Solution:

Check the git connector. Make sure the branch mentioned is available on the remote system.

Issue:

Hi, I provisioned a Helm Connector as the Connector Artifact Server. I would like to get the details of the connector via the Harness SDK. I am aware of the GitHub repo; however, I cannot find the exact function I need to use. Kindly help.

Solution:

If you want to read Harness configurations or automate the creation of resources, you can use the REST APIs.

Issue:

Example for a connector in Harness SDK

Solution:

Currently, the SDK doesn't have native support for connector resources, but they can still be fetched using the config-as-code APIs. If you know the path to the connector, you can do something like this: if you already have a connector created, you can find the path by going to Setup -> Config As Code (located in the top right-hand corner). From there you'll be able to see the path to the connector's YAML configuration. For example, the path would be something like Setup/Artifact Servers/Harness Docker Hub.yaml. I have attached a screenshot of it to look at.

Using the method I linked to, you'll be able to get back this YAML. There's not yet a native object in the SDK that you can easily parse this into, but you can create one yourself and deserialize it. Let me know if that helps in any way. (Screenshot attached for reference) Harness-sdk

MANIFEST FILES

Issue:

What happens if a user tries to make a deployment using a Helm chart and inputs the wrong chart version? manifest-error

Solution:

Check the chart version while adding it to the manifest step in the pipeline stage. manifest-solution

ARTIFACTS

Issue:

When I try to run the pipeline, if I select the “tags” dropdown to add a tag to the execution, I get the following error: Stage Deploy_Dev: Please make sure that your delegates are connected. Refer [docs](https://ngdocs.harness.io/article/re8kk0ex4k) for more information on delegate Installation.

Solution:

If you are using Docker to pick your artifacts, make sure you define the right path. The right URL is: https://registry.hub.docker.com/. You can define tags by providing inputs at runtime or define them in your YAML file, e.g. tag: “latest”.

ENVIRONMENT

Issue:

Multiple options are available for the user to set the SERVICE

  1. If the user wants to set a fixed value:

Solution:

edit-service

  2. If the user wants to pass it at runtime:

Solution:

pass-runtime

DEPLOYMENT STRATEGY

Issue:

What if the user selects a step irrelevant to the deployment strategy? deployment-strategy

Solution:

Make sure you select this step based on the deployment strategy you chose. For example, if you select a canary deployment, there is a Canary Delete step to delete the workloads, while other deployment types have a Delete step for cleanup.

Possible errors at the time of deployment

Issue:

What happens if the user inputs an invalid timeout for a step in a pipeline stage?

Solution:

Follow the tooltip tooltip

Issue:

When you enable editing mode and then try to navigate away from the YAML editor. edit-mode

Solution:

Make sure you complete all the steps in the stage. Incomplete YAML will not allow you to navigate away or move further.

DELEGATE

Issue:

Does the Harness delegate running in Kubernetes support node architecture using ARM64?

Solution:

  1. We do not provide arm64 binaries for the client tools; you would need to build a custom Docker image that already has the arm64 client-tool binaries under the same paths we expect. Then we would see they are already there and not overwrite them.
  2. To use arm64, you would need to reverse engineer our Docker image: create your own Docker image that installs an arm64 JRE to run the Harness delegate jar, and then pre-populate the client-tools directory with arm64 versions of the expected binaries. The following binary list can be used:
kubectl/v1.13.2/kubectl
go-template/v0.4/go-template
harness-pywinrm/v0.4-dev/harness-pywinrm
helm/v2.13.1/helm
helm/v3.1.2/helm
helm/v3.8.0/helm
chartmuseum/v0.12.0/chartmuseum
chartmuseum/v0.8.2/chartmuseum
tf-config-inspect/v1.0/terraform-config-inspect
tf-config-inspect/v1.1/terraform-config-inspect
oc/v4.2.16/oc
kustomize/v3.5.4/kustomize
kustomize/v4.0.0/kustomize
scm/36d92fd8/scm

Note: We will be launching arm64 support in a couple of weeks.
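As an untested sketch of option 2 above, a custom image build could pre-create the client-tools layout and drop arm64 binaries into it. The client-tools root directory and the download URL are assumptions; adjust them to match your delegate image:

```shell
# Sketch only: pre-create the client-tools layout the delegate expects.
# CLIENT_TOOLS root is an assumption; it must match the path in your delegate image.
CLIENT_TOOLS="${CLIENT_TOOLS:-/tmp/client-tools}"
for tool_path in kubectl/v1.13.2 helm/v3.8.0 kustomize/v3.5.4; do
  mkdir -p "$CLIENT_TOOLS/$tool_path"
done
# For each entry in the binary list above, download the arm64 build into place, e.g.:
#   curl -fsSL -o "$CLIENT_TOOLS/kubectl/v1.13.2/kubectl" \
#     https://dl.k8s.io/release/v1.13.2/bin/linux/arm64/kubectl
#   chmod +x "$CLIENT_TOOLS/kubectl/v1.13.2/kubectl"
```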

Issue:

The user wants to provision target infrastructure using terraform on Azure and deploy a sample app in it. The build pipeline executes but the deploy stage fails. delegate-capability-issue

Solution:

  1. Check if the delegate is ACTIVE and has enough resources assigned to the pod. You can check the pods state with the commands like:
kubectl get pods -n <namespace>
kubectl describe pods -n <namespace>
  2. Delegate capability issues depend on the specific user’s use case. For example, if you want to do a Terraform deployment, some Terraform versions require the terraform binary to be installed on the delegate pod. If you want to do a Helm deployment using Helm v2, you will need to install Helm v2 and Tiller on the delegate pod.

  3. Please review our docs on supported integrations.

TEMPLATE

Issue:

Is there a way to do the equivalent of the helm template command to render the templates and display the output in the Harness?

Solution:

We run helm template when the manifest type is Helm Chart with a Kubernetes deployment type. We don’t output the result to a variable for the user to view; it can only be viewed in our execution logs. The same applies to a Native Helm deployment, where we run helm template and then perform the helm install or helm upgrade; the output is only visible in the execution log.

helm-chart

We have seen our users fetch the chart in a shell script step and run the helm commands on the chart to see the output before Harness does a deployment.
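That approach can be sketched in a Shell Script step like this (the repo URL, chart name, and values file are placeholders):

```shell
# Sketch of a Shell Script step that renders a chart locally so you can
# inspect the manifests before Harness deploys them.
render_chart() {
  helm repo add example https://charts.example.com   # placeholder repo
  helm pull example/my-app --untar --untardir /tmp/charts
  helm template my-release /tmp/charts/my-app -f my-values.yaml
}
# In the step you would run: render_chart > rendered.yaml
```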

Issue:

I am using Harness to spin a short-lived Kubernetes job. Is there any way to fetch the logs back to the Harness?

Solution:

You can write a shell script to fetch logs for you as an output and then you can export/download them as deployment logs.
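A minimal sketch of such a step (the job name, namespace, and timeout are placeholders):

```shell
# Sketch of a Shell Script step that waits for a short-lived Job to finish
# and captures its logs so they appear in the step's execution output.
fetch_job_logs() {
  job_name="$1"
  namespace="$2"
  kubectl wait --for=condition=complete "job/$job_name" -n "$namespace" --timeout=300s
  kubectl logs "job/$job_name" -n "$namespace" --all-containers=true
}
# Usage in the step: fetch_job_logs my-short-job default > job.log
```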

Issue:

I was referring to this guide in Harness docs to learn about continuous delivery. On running the pipeline this error showed up. Does anyone know why this came and how to resolve it? API-calls

Solution:

Harness uses its own ConfigMap for every deployment to store the release history in a Kubernetes cluster. This ConfigMap can be used for rollback if the deployment fails. Let’s say you are at your very first deployment (the ConfigMap is yet to be created by Harness) and you make an API call to check whether the ConfigMap exists; you may get this error:

Invalid request: Failed to get ConfigMap. Code: 403, message:{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"configmaps \"release-abcdef\" is forbidden: User \"system:serviceaccount:sa:harness\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"ns\"","reason":"Forbidden","details":{"name":"release-abcdef","kind":"configmaps"},"code":403} 

It is clear from the error above that the API call is failing due to permissions. Check the permissions and try again; this doc should help.
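For example, the 403 above could be resolved by granting the service account from the error message access to ConfigMaps in the target namespace. A minimal sketch, with names taken from the error message (adjust the verbs to your needs):

```yaml
# Hypothetical Role/RoleBinding granting the delegate's service account
# access to ConfigMaps in the namespace from the error message.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: harness-configmap-access
  namespace: ns            # namespace where Harness stores the release ConfigMap
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: harness-configmap-access
  namespace: ns
subjects:
  - kind: ServiceAccount
    name: harness          # from system:serviceaccount:sa:harness
    namespace: sa
roleRef:
  kind: Role
  name: harness-configmap-access
  apiGroup: rbac.authorization.k8s.io
```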

Issue:

Hi, I am creating a custom service command in Harness that simply deploys a zip file to my host. I realized that if for some reason my artifact does not get deployed correctly and the zip file does not exist on my host, an error will NOT get thrown when the unzip command fails. I tried forcing Harness to throw an error by writing to stderr using:

if [-f "$ARTIFACT_FILE_NAME" ]; then
unzip "$ARTIFACT_FILE_NAME"
else
1>&2 echo "Error: cannot find artifact"

But Harness still treats this as an INFO message in the logs and does not fail the deployment. Any suggestions for how to fail a deployment through my bash script?

Solution: You can refer to this doc. Make sure you use set -e; also, syntactically the '[-f' part of the script is going to cause a failure, since [ requires a space after it.
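A corrected sketch of the script from the question (note the space after '[', the closing fi, and the explicit non-zero exit that fails the step):

```shell
# Corrected sketch: a missing artifact now produces a non-zero status,
# which (together with set -e) fails the deployment step.
deploy_artifact() {
  if [ -f "$ARTIFACT_FILE_NAME" ]; then   # note the space after '['
    unzip "$ARTIFACT_FILE_NAME"
  else
    echo "Error: cannot find artifact" 1>&2
    return 1                              # non-zero status fails the step
  fi
}
# With `set -e` at the top of the step, any failing command aborts the script.
```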

Issue: How to pull zip files from artifacts in the cd stage?

Solution: We don’t support that yet in NextGen. We only support containerized Kubernetes deployments and native Helm deployments in the platform. Please review our docs on supported integrations.

SECRETS

SSH KEY

Issue:

Error while configuring the Linux server with ssh in Harness. ssh-server

Solution:

  1. The connection issue likely has something to do with the URL. For an AWS Linux box, it’s usually something like ec2-76-939-110-125.us-west-1.compute.amazonaws.com. For Azure, it is normally something like ssh -i ~/.ssh/id_rsa azureuser@10.111.12.123, so in Harness try it without the https:// scheme.
  2. The SSH key in your screenshot looks like it’s in NextGen. You can also use a Shell Script step in NextGen.
  3. In Harness CurrentGen you can deploy to any Linux VM using our SSH Deployment Type, you can also use Azure VMSS.
  4. You can deploy to a physical server.
  5. If you’re just looking to copy files as part of a 'workflow', you can use a Shell Script step.
  6. For artifact copy, as opposed to deployment, you can use the SSH Service and Copy Artifact Command.

COMMON ISSUES

Issue:

Hi, need some help with the following questions:

  1. In Harness CD, while deploying a Helm chart (from an HTTP Helm repo), can I upload a custom values.yaml file that is at a different location?
  2. I installed a Helm chart on an AKS cluster using Harness, but when I run helm list locally in my terminal I am unable to see the release name in the output. Why?
  3. In Harness CD, can I give a custom release name while installing the Helm chart in the pipeline?

Solution:

To answer your question:

  1. You can do that, refer to the section values.yaml
  2. By default the app is installed under the Harness namespace, so you need to add -n harness to the Helm command. Try:
helm list -n harness
  3. Yes, we can do that custom-release-name

Issue:

Can Harness CD deploy a helm chart and support kustomize patch on top of the helm chart? Are helm charts deployed by Harness visible using the helm CLI on the targets?

Solution:

  1. We can fetch Helm charts for Kustomize deployments, and we can apply patches to those deployments. The Harness agent has its own Helm client, which we use to query the deployed resources associated with the chart. With Kustomize and Helm charts, Harness wouldn’t deploy using Helm; we deploy using the kustomize CLI and can apply patches. We track the resources with the labels we apply to the Kubernetes objects we deploy.
  2. You could run helm list to list all the Helm charts, that’s no problem. However, Harness will manage the state and take care of rollback for you, versus the Helm client and Tiller. We already know what the previous resources are, so in the event of a deployment failure we can roll back to the last known healthy state. We also have an Argo CD integration where you can leverage your existing Argo CD cluster, manage it with Harness, and integrate it natively with Harness CI.

Harness can use ArgoCD for GitOps

Harness can manage and orchestrate deployments out of the box without Argo CD cluster management. We manage the deployed resources on the cluster, and we have a slew of integrations with Kubernetes, Helm, Kustomize, and OpenShift. We give you canary, blue-green, and rolling deployment logic out of the box, and we integrate with Argo CD if you want that style of deployment with GitOps. Quick reference docs:

  1. Harness Argocd GitOps quickstart
  2. Use kustomize for Kubernetes deployments

Need further help?

Feel free to ask questions at community.harness.io or join community slack to chat with our engineers in product-specific channels like:

#continuous-integration Get support regarding the CI Module of Harness.

#continuous-delivery Get support regarding the CD Module of Harness.

· 3 min read
Debabrata Panigrahi

This tutorial aims at enabling users to set up a Kubernetes cluster on AWS and will serve as the foundation for your CI/CD pipeline infrastructure. After the infrastructure is ready on a free account, you can proceed to create and install a Delegate.

Credits for AWS

To get free credits on AWS, please use the following resources:

If you are a student please sign in using AWS Educate

Note: Under the AWS Free tier the EKS service is not available, so it’s suggested to get some free credits and use them for EKS.

Pre-requisites:

There are certain requirements in terms of access and permissions and memory resources for the delegate to function properly.

Creating a Cluster:

Considering you are a first-time user, please consider the following specifications along with the above prerequisites, while creating a cluster:

  • Number of nodes: minimum of 3.
  • Machine type: 4vCPU
  • Memory: 12GB RAM and 6GB Disk Space. 8GB RAM is for the Delegate. The remaining memory is for Kubernetes and containers.
  • Networking: Outbound HTTPS for the Harness connection, and to connect to any container image repo. Allow TCP port 22 for SSH.

For creating a cluster follow the steps mentioned in the documentation, also you can take the help of the demo in the video below.

You will be able to see your cluster, after creation on the management console, like the picture below.

AWS Dashboard

Authenticate to the cluster:

  1. Open a terminal and navigate to where the Delegate file is located.
  2. You will connect to your cluster using the terminal so you can simply run the YAML file on the cluster.

AWS Access

  3. In the same terminal, log into your Kubernetes cluster. In most platforms, you select the cluster, click Connect, and copy the access command.

AWS Configure

  4. Next, install the Harness Delegate using the harness-delegate.yml file you just downloaded. In the terminal connected to your cluster, run this command:

    kubectl apply -f harness-delegate.yml
  5. The successful output would look like this:

delegate-install

  6. To validate, run the following command and check:

    # kubectl get namespaces
    NAME STATUS AGE
    default Active 29h
    harness-delegate-ng Active 24m
    kube-node-lease Active 29h
    kube-public Active 29h
    kube-system Active 29h

Also, you could check for pods under your AWS cluster to find the delegate

delegate pods

  7. Now that your cluster is operational, you can add resources to it using the kubectl utility. Please use the Start Deploying in 5 Minutes with a Delegate-first Approach tutorial to install a Delegate and move forward with creating your CI/CD pipeline.

Warning: You have to exit the present pipeline without saving to view delegate details/continue with further steps.

  8. You can check your delegates on the dashboard under Project Setup.

check-delegate

  9. The delegate details will look similar to this:

delegate-available

Note: Apart from the above-mentioned way, there are other ways to install a delegate on AWS, for example using EC2.

Need further help?

Feel free to ask questions at community.harness.io or join community slack to chat with our engineers in product-specific channels like:

#continuous-delivery Get support regarding the CD Module of Harness. #continuous-integration Get support regarding the CI Module of Harness.

· 6 min read
Dhrubajyoti Chakraborty

Introduction

This beginner guide aims to help learners configure the run step settings in Harness CI. We will learn about the different settings and permissions for the Harness CI Run Tests step, which executes one or more tests on a container image.

Before We Begin

Are you confused by terms like Container Resources or Image Pull Policy while creating and configuring a run step for your CI pipeline? In this article we will discuss a few such terms on the Harness CI platform and how you can configure them to set up your run step according to your requirements.

Configuration Parameters in a Run Step

  • Name
  • ID
  • Description
  • Container Registry
  • Image
  • Namespaces
  • Build Tool
  • Language
  • Packages
  • Run Only Selected Tests
  • Test Annotations
  • Pre-Command & Post-Command
  • Report Paths & Environment Variables
  • Output variables
  • Image Pull Policy
  • Container Resources

Name

The unique name for the run step. Each run step must have a unique name, and it is recommended to use a name that describes the step.

ID

Most Harness entities and resources include a unique ID, also referred to as an entity identifier, that is not modifiable once the entity is created. It provides a constant way to refer to an entity and avoids issues that can arise when a name is changed.

Initially, Harness automatically generates an identifier for an entity; it can be modified while you are creating the entity, but not after the entity is saved.

Even if you rename the entity, the identifier remains the same. The automatically generated identifier is based on the entity name and follows the identifier naming conventions. If an entity name cannot be used because it is already taken by another entity, Harness automatically adds a suffix in the form of -1, -2, etc.

Check out the documentation to know more about Entity Identifier Reference

Description

This is generally a text string describing the run step and what it does.

Container Registry

Container Registry refers to the Harness Connector for a container registry, such as DockerHub. This is the container registry for the image Harness will use to run build commands.

Check out the documentation to know more about Harness Container Image Registry

Image

Image is the name of the Docker image to use when running commands, for example: alpine-node. The image name should include the tag; if no tag is specified, it defaults to the latest tag. You can use any Docker image from any Docker registry, including images from private registries.

Different container registries have different name formats:

  • Docker Registry: enter the name of the artifact you want to deploy, such as library/tomcat. Wildcards are not supported.
  • GCR: enter the name of the artifact you want to deploy. Images in repos need to reference a path starting with the project ID that the artifact is in, for example: us.gcr.io/playground-243019/quickstart-image:latest.
  • ECR: enter the name of the artifact you want to deploy. Images in repos need to reference a path, for example: 40000005317.dkr.ecr.us-east-1.amazonaws.com/todolist:0.2.

Namespaces (C#)

A comma-separated list of the Namespace prefixes that you want to test.

Build Tool

This is where you select the build automation tool & the source code language to build, such as Java or C#.

Packages

This is a list of source code package prefixes separated by commas. For example: com.company., io.company.migrations

Run Only Selected Tests

If this option is unchecked, Test Intelligence is disabled and all tests will run.

Test Annotations

This is where you enter the list of test annotations used in unit testing separated by commas. Any method annotated with this will be treated as a test method. The defaults are: org.junit.Test, org.junit.jupiter.api.Test, org.testng.annotations.Test

Pre-Command & Post-Command

In pre-command you enter the commands for setting up the environment before running the tests. For example, printenv prints all or part of the environment.

In post-command you enter the commands used for cleaning up the environment after running the tests. For example, sleep 600 suspends the process for 600 seconds.

Report Paths

This refers to the path to the file(s) that store results in the JUnit XML format. You can enter multiple paths. Glob is supported.
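For illustration, report paths in a step’s YAML might look like this (the paths are placeholders, and the exact schema may differ by step type and Harness version):

```yaml
# Hypothetical reports fragment of a Run Tests step.
reports:
  type: JUnit
  spec:
    paths:
      - "**/target/surefire-reports/*.xml"   # glob patterns are supported
      - "results/junit-*.xml"
```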

Environment Variables & Output Variables

Environment variables are passed to the container and used in the Commands.

Output variables expose environment variables for use by other steps and stages of the pipeline. You can reference a step’s output variable using the step ID and the name of the variable in Output Variables.

output-var
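As a sketch, assuming a step with ID build_step exports an output variable MY_VAR, a later step could reference it with an expression like the one below (build_step and MY_VAR are placeholder names, and the exact expression root can vary by scope and Harness version):

```yaml
# Hypothetical later step reading an earlier step's output variable.
- step:
    identifier: print_var
    type: Run
    spec:
      command: echo "<+steps.build_step.output.outputVariables.MY_VAR>"
```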

Image Pull Policy

This is where you make the choice to set the pull policy for the image.

  • Always: The kubelet queries the container image registry to resolve the name to an image digest every time the kubelet launches a container. If the kubelet encounters an exact digest cached locally, it uses its cached image; otherwise, the kubelet downloads (pulls) the image with the resolved digest, and uses that image to launch the container.

  • If Not Present: The image is pulled only if it isn't already present locally.

  • Never: The kubelet assumes that the image exists locally and doesn't try to pull the image.

Container Resources

The container resources configuration specifies the maximum resources used by the container at runtime.

Limit Memory Maximum memory that the container can use. You can express memory as a plain integer or as a fixed-point number using the suffixes G or M. You can also use the power-of-two equivalents Gi and Mi.

Limit CPU The maximum number of cores that the container can use. CPU limits are measured in cpu units. Fractional requests are allowed: you can specify one hundred millicpu as 0.1 or 100m.

See Resource units in Kubernetes
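As an illustrative fragment (schema assumed), container resource limits on a step might be expressed as:

```yaml
# Hypothetical resources fragment of a step definition.
spec:
  resources:
    limits:
      memory: 2Gi   # plain integers, G/M suffixes, or power-of-two Gi/Mi
      cpu: 500m     # half a core; equivalently 0.5
```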

note

This is not applicable in case you have opted for Hosted by Harness in your Infrastructure settings of the step.

Timeout

This specifies how long the step is allowed to run. Once the timeout is reached, the step fails and pipeline execution continues.

NOT ABLE TO TROUBLESHOOT THE ENCOUNTERED ERROR

In case the user is unable to troubleshoot the application error or pipeline execution failures the user can log/submit a ticket to Harness Support. To log a ticket follow the process:

  1. Click the Help button in the Harness Manager
  2. Click Submit a Ticket or Send Screenshot
  3. Fill out the pop up form and click Submit Ticket or Send Feedback

· 4 min read
Hrittik Roy

Creating a delegate requires the creation of an infrastructure in which computational tasks can take place. The infrastructure is typically a Kubernetes cluster.

This tutorial shows you how to set up a Kubernetes cluster on Azure and will serve as the foundation for your CI/CD pipeline infrastructure. After the infrastructure is ready on a free account, you can proceed to create and install a Delegate.

Student Account

If you’re a student, you’re in luck as there is Azure for Students where you can sign in with your educational email address to create an account without a credit card to get $100 worth of credits.

These credits can be used to deploy the Kubernetes Cluster and other services if required.

To get started with the account creation, go to Azure for Students.

Step 1: Click on Activate Now

Activate Free Account

Step 2: After signing in with a Microsoft account, enter your educational email address:

Activate Azure for Students

Step 3: Sign in to Azure Portal!

A free Azure Account

For anyone who can verify their identity with a phone number and a credit card, Azure offers a free account with $200 in Azure credit. Once your account has been verified, you can create a Kubernetes cluster in it.

Step 1: Go to the Azure Free Account Page

Step 2: Click on Start free to start the account creation procedure

Azure Free Account

Step 3: Fill in the following fields

Fill Details

Step 4: Once your details are in click on Sign Up after you have accepted the terms and conditions.

Sign Up Credit Card

Step 5: Verify your phone number

Step 6: Put in your credit card details; depending on your region, a small amount will be deducted and then refunded for verification.

Step 7: You can access your account using the Azure Portal

Azure Portal

Azure portal is the web-based management console for Microsoft Azure. It provides a single, unified view of all your Azure resources, including compute, storage, networking, and security. You can use the Azure portal to deploy and manage your Azure resources and to monitor their health and usage.

Azure Portal

You will use the portal to create your Kubernetes Cluster and connect to it.

Create a Cluster

To create a cluster, you will use Azure Kubernetes Service (AKS), the managed Kubernetes offering from Azure. The steps are as follows:

Step 1: Click on Create a Resource after signing in

Create a Resource

Step 2: Search Container and then click on Kubernetes Service

Find Kubernetes Service

Step 3: Click on Create

Create Kubernetes Service

Step 4: On the Basics page, configure the following options for a Delegate to Run:

  • Project details:
    • Select an Azure Subscription.
    • Select or create an Azure Resource group, such as DelegateGroup.
  • Cluster details:
    • Enter a Kubernetes cluster name, such as myEnvironment.
    • Select a Region for the AKS cluster
    • Select 99.5% for API server availability for lower cost
  • Go to Scale Method and change it to Manual, as your account might not have sufficient compute quota for autoscaling. Next, change the Node Count to 2 image

Step 5: Start the resource validation by clicking Review + Create on your portal. Once validated, click Create to begin the process of cluster creation. Wait a few minutes for the cluster to deploy.

Connect to your cluster

Now that your cluster is ready, you can connect to the Azure Cloud Shell on your portal and open the terminal.

Cloud Shell

Navigate to your cluster and click on Connect!

Connect to Cluster

Follow the steps displayed on the right panel and then you can connect to your cluster!

Run kubectl cluster-info to display details on your cluster!

Next Steps

Now that your cluster is operational, you may add resources to it by using the kubectl utility, as you can see. Please use Start Deploying in 5 Minutes with a Delegate-first Approach tutorial to install Delegate at this time and move forward with creating your CI/CD pipeline.

Need further help?

Feel free to ask questions at community.harness.io or join community slack to chat with our engineers in product-specific channels like:

· 7 min read
Dhrubajyoti Chakraborty

Introduction

This guide covers common use cases to help end users implement pipeline executions successfully in Harness CI.

Harness provides various sources & tools to easily troubleshoot frequently encountered errors to fix pipeline failures. This guide lists some of the common issues faced while implementing and designing pipelines in Harness CI and the possible solutions.

What’ll we be covering here?

  • Syntax Verification
  • Variable Verification
  • Troubleshooting Delegate Installation Errors
  • Troubleshooting Triggers Errors
  • Troubleshooting Git Experience Errors

Verify Syntax

An early-stage error is generally caused by incorrect syntax. If a syntax error is detected, the pipeline returns an invalid YAML syntax message and does not start running.

Edit pipeline.yaml in the pipeline studio

The pipeline editor in the YAML view is recommended for editing (rather than the graphical stage view). Major features of the editor include:

  • Creation of connectors, secrets & pipelines from scratch
  • Realtime schema validation
  • Intellisense & auto-completion
  • Field descriptions & rich inline documentation
  • Free Templates for YAML Samples

This feature helps the developer to validate the existing pipeline’s correctness and helps in quick modification alongside copying of a pipeline as a code.

Verify Variables

A very integral part of troubleshooting errors in Harness CI is to verify the variables present in the pipeline and their values. Major configuration in the pipeline depends on the variables and verifying them becomes the easiest way to reach the root cause and potential solution of the problem.

Visit the variables section on the Harness CI platform. Check if the expected variables and their values match and are implemented at the expected stage for the pipeline.

Delegate Setup Failure

The majority of the encountered errors in Harness CI revolve around delegate setup processes. Make sure you have a complete understanding of how to set up a harness delegate from scratch & understand how the Harness manager and delegate complement each other.

Delegate setup also fails if the SSH key used for deployment to the targeted host is incorrect. This usually happens due to incorrect information about the SSH configuration in the Harness Secrets Management and also if the targeted host is not configured to support SSH connections.

For troubleshooting, check the watcher.log file, which provides information about the delegate version.

Delegate fails to establish a connection with the Harness Manager

In case of connection failures between the delegate and the Harness Manager, we can use ping on the delegate host to test whether response times for app.harness.io and other URLs are consistent. We can use traceroute to check the network route and verify whether there is any redirection. To verify that DNS resolution is working, we can use nslookup. We can flush the client's DNS cache (the command depends on the OS). We can run tests to check for local network errors or NAT license limits. In the case of cloud service providers, we have to ensure that the security groups allow outbound traffic on HTTPS port 443.
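Those checks can be sketched as a quick script to run from the delegate host:

```shell
# Sketch: basic connectivity checks from the delegate host.
check_delegate_connectivity() {
  ping -c 4 app.harness.io        # response times should be consistent
  traceroute app.harness.io       # look for unexpected redirection or hops
  nslookup app.harness.io         # confirm DNS resolution works
  curl -sS -o /dev/null -w '%{http_code}\n' \
    https://app.harness.io        # confirm outbound HTTPS (port 443) is allowed
}
# Run: check_delegate_connectivity
```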

No eligible delegate found for the assigned pipeline execution

This error is encountered when the delegate fails to meet the URL criteria for validation. All delegates in Harness are identified by their Harness account ID along with some additional factors. For example, on VMs, delegates are identified by the combination of their hostname and IP address, so if the IP changes, the Harness Manager fails to identify the delegate. In the case of K8s and ECS delegates, the IP changes when the pod is rescheduled.

The delegate sends the heartbeat, deployment data, and time-series and log data for continuous verification to the Harness Manager. The credentials used by the delegate must have the roles and permissions required to perform the task. For example, if the account used for an AWS Cloud Provider does not have the roles required for ECS deployments, the task will fail.

For more information visit How does Harness Manager Identify Delegates?

K8s Delegate Deletion Failure

To delete the Harness Delegate from a K8s cluster, we have to delete the StatefulSet for the delegate. The StatefulSet ensures that the expected number of pods is running and available, so deleting the delegate without deleting the StatefulSet results in the pod being recreated.

For example, if the delegate name is delegate-sample, we can delete the StatefulSet with the command below:

$ kubectl delete statefulset -n harness-delegate delegate-sample

Triggers Rejection Failures

This usually happens when the user uses a webhook trigger to execute a pipeline or workflow and the artifact name in the cURL command is different from the artifact name in the Harness Service.

Trigger Rejected. Reason: Artifacts Are Missing for Service Name(S)

This is mostly the result of a bad artifact build version name in the cURL command. For example, a cURL with build number v1.0.4-RC8:

curl -X POST -H 'content-type: application/json'
--url https://app.harness.io/gateway/api/webhooks/... . .
-d '{"application":"tavXGH . . z7POg","artifacts":[
{"service":"app","buildNumber":"v1.0.4-RC8"}]}'

If the Harness Service artifacts use a different nomenclature, the cURL command will fail to execute. Thus, making sure the webhook cURL command has the correct artifact name is very important.

Failure when executed Git Push in Harness

In the case of two-way sync between the Git repository and the Harness application, a push to Harness will fail unless the Git YAML files and the required settings are configured before pushing the app to Harness.

For example, in case we have a predefined infrastructure definition and the required labels or parameters are not filled or filled in incorrectly the push to git is more likely to encounter a failure.

Using the Harness Manager to configure the app first is the best way to avoid this error. This generally ensures that all the required settings are configured correctly and synced with the Git repository.

Triggers: zsh: no matches found

In some OS versions, specifically on macOS, the default shell is zsh. The zsh shell requires that the cURL URL either not contain the “?” character or be wrapped in quotes.

For example, this will fail in zsh because of the unquoted “?”:

curl -X POST -H 'content-type: application/json' --url https://app.harness.io/gateway/api/webhooks/xxx?accountId=xxx -d '{"application":"fCLnFhwsTryU-HEdKDVZ1g","parameters":{"Environment":"K8sv2","test":"foo"}}'

This shall work:

curl -X POST -H 'content-type: application/json' --url "https://app.harness.io/gateway/api/webhooks/xxx?accountId=xxx" -d '{"application":"fCLnFhwsTryU-HEdKDVZ1g","parameters":{"Environment":"K8sv2","test":"foo"}}'

User does not have "Deployment: execute" permission

The error User does not have "Deployment: execute" permission means that the user’s Application Permissions do not include the Execute action. This can be solved by fixing the application permission configuration. The user can modify the Harness Configure as Code YAML files for the Harness application.

To enable editing of the YAML file, make sure the user’s Harness User Groups have the account permission Manage Applications enabled, as well as the Update application permission for the specific applications.

NOT ABLE TO TROUBLESHOOT THE ENCOUNTERED ERROR

In case the user is unable to troubleshoot the application error or pipeline execution failures the user can log/submit a ticket to Harness Support. To log a ticket follow the process:

  1. Click the Help button in the Harness Manager
  2. Click Submit a Ticket or Send Screenshot
  3. Fill out the pop up form and click Submit Ticket or Send Feedback