asdf gcloud ERROR: gcloud failed to load: No module named '_sqlite3'

When trying to use an asdf-managed gcloud installation, it fails with the following error:

gcloud version
ERROR: gcloud failed to load: No module named '_sqlite3'

After installing various development packages (readline, tk, etc.), the last one needed to make gcloud work was liblzma-dev (sudo apt install liblzma-dev).
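For reference, a sketch of the relevant installs on Debian/Ubuntu (the package names are the usual dev packages for the missing Python modules; the asdf reinstall step is an assumption, in case gcloud's bundled Python needs to be set up again to pick up the new libraries):

sudo apt install libsqlite3-dev libreadline-dev tk-dev liblzma-dev
asdf uninstall gcloud 420.0.0
asdf install gcloud 420.0.0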

Then gcloud works

gcloud version
Google Cloud SDK 420.0.0
bq 2.0.86
bundled-python3-unix 3.9.16
core 2023.02.24
gcloud-crc32c 1.0.0
gsutil 5.20
Updates are available for some Google Cloud CLI components.  To install them,
please run:
  $ gcloud components update

Post Process Forwarder – KafkaError "Offset Out of Range" (Kubernetes – Sentry – Helm)

Problem

After upgrading a self-hosted instance of Sentry in Kubernetes with a Helm chart, one or more pods keep failing with the following error:

Post Process Forwarder - KafkaError "Offset Out of Range"

Solution

There is a section in Sentry's documentation here that describes the issue and leads to the comment here.

The comment and its steps use the --bootstrap-server 127.0.0.1:9092 flag, which is the one that works.

It is also important to run the command for the group you have the issue with (i.e. snuba-events-subscriptions-consumers) to fix this.

So find the kafka-0 pod in your Kubernetes cluster and log in to it
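If you need to locate the pod name first, a quick sketch (assuming the sentry namespace used below):

kubectl -n sentry get pods | grep kafka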

kubectl -n sentry exec -it sentry-kafka-0 -- /bin/bash

Get a list of the groups

I have no name!@sentry-kafka-0:/$ kafka-consumer-groups.sh --bootstrap-server 127.0.0.1:9092 --list
snuba-consumers
ingest-consumer
transactions_group
snuba-post-processor
snuba-events-subscriptions-consumers
subscriptions-commit-log-1de9aaa...
snuba-post-processor:sync:880fbbb...
subscriptions-commit-log-b755cccc...
snuba-replacers
query-subscription-consumer

Run the reset command for the group you have the issue with (snuba-events-subscriptions-consumers)

I have no name!@sentry-kafka-0:/$ kafka-consumer-groups.sh --bootstrap-server 127.0.0.1:9092 --group snuba-events-subscriptions-consumers --topic events --reset-offsets --to-latest --execute               

GROUP                          TOPIC                          PARTITION  NEW-OFFSET      
snuba-events-subscriptions-consumers events                         0          4834425   
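Optionally, the new offsets can be verified with the same script's --describe flag:

kafka-consumer-groups.sh --bootstrap-server 127.0.0.1:9092 --group snuba-events-subscriptions-consumers --describe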
  

Grafana blank graphs in dashboards after update

Upgrading Grafana (kube-prometheus-stack) from version 23.1 to the latest one, currently 44.2.1, causes many Grafana graphs to disappear from existing dashboards.

Since editing the actual graphs and using 'Run queries' does not seem to work, a workaround is the following:

  • Edit the blank graph.
  • Add a new query (B); even an empty one is fine.
  • Use 'Run queries' (it does not matter if you use the button on the original query or on the new query B).
  • Delete the new query B.
  • The graph should appear again, so use the 'Apply' button at the top right.
  • Repeat the process for any additional graphs in the dashboard.
  • When you finish, 'Save' the dashboard.
  • The dashboard should be working again.

Removing gitlab’s container registry ‘repository scheduled for deletion’ message

If you want to remove the 'repository scheduled for deletion' message from a self-hosted GitLab container registry installation, you can do the following.

Log in to the toolbox pod

kubectl --kubeconfig /path/to/gitlab/kubeconfig -n gitlab-system exec -it gitlab-toolbox-pod-name -- bash

Start the rails console

cd /srv/gitlab
bundle exec rails console

Find the project by its id and then get the container registry for it

project = Project.find(project_id)
registry = project.container_repositories.first
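If you only know the project's path rather than its numeric id, Project.find_by_full_path can be used instead (the 'group/project' value below is a placeholder):

project = Project.find_by_full_path('group/project')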

The message is displayed when the status of the container repository is set to 'delete_scheduled', so clear it by setting it back to nil

registry.status = nil
registry.save
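To confirm the change, the record can be reloaded and the status checked (an optional sanity check):

registry.reload.status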

Error: parse error at (gitlab-agent/templates/observability-secret.yaml:1): unclosed action

Problem

You are trying to install the GitLab Kubernetes agent (kas) following the installation instructions from the GitLab agent registration, but you get the following error:

Error: parse error at (gitlab-agent/templates/observability-secret.yaml:1): unclosed action

Solution

This is caused by the Helm version in use; upgrading Helm makes the installation work with no issues.

 helm version
version.BuildInfo{Version:"v3.5.3", GitCommit:"041ce5a2c17a58be0fcd5f5e16fb3e7e95fea622", GitTreeState:"dirty", GoVersion:"go1.15.8"}

So trying to install with this version gives the previous error.

Change the Helm version (e.g. with asdf)
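If the newer version is not installed yet, it may need to be installed first (assuming the asdf helm plugin is already added):

asdf install helm 3.7.0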

asdf global helm 3.7.0


Checking the version

helm version
version.BuildInfo{Version:"v3.7.0", GitCommit:"eeac83883cb4014fe60267ec6373570374ce770b", GitTreeState:"clean", GoVersion:"go1.16.8"}

And running the helm installation again

helm upgrade --install gitlab-kas-agent gitlab/gitlab-agent \
  --namespace gitlab-agent --set image.tag=v15.6.0 \
  --set config.token=token_from_gitlab-registration \
  --set config.kasAddress=wss://kas.domain_name.tld


Release "gitlab-kas-agent" does not exist. Installing it now.
NAME: gitlab-kas-agent
LAST DEPLOYED: Mon Dec  5 13:38:33 2022
NAMESPACE: gitlab-agent
STATUS: deployed
REVISION: 1

TEST SUITE: None

Exporting Gitlab project through the API

To export a project through GitLab's API, first create a Personal Access Token (PAT) and give it api and repository permissions.


Export PAT as an environment variable.
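For example (the token value is a placeholder):

export PAT=your_personal_access_token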

Then find the project id (shown on the main page of the project; 1111 in this example).

To start an export, make a POST request to the project's export endpoint.
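A sketch of that request, assuming the same PAT variable and project id as above:

curl --request POST --header "PRIVATE-TOKEN: $PAT" "https://your.gitlab.repo/api/v4/projects/1111/export"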

Then check the status and get the download link when it has finished

curl --header "PRIVATE-TOKEN: $PAT" "https://your.gitlab.repo/api/v4/projects/1111/export"
{"id":1111,"description":"Project description","name":"project","name_with_namespace":"MyOrg / project","path":"project","path_with_namespace":"myorg/project","created_at":"2021-11-01T10:57:23.195+02:00","export_status":"finished","_links":{"api_url":"https://your.gitlab.repo/api/v4/projects/1111/export/download","web_url":"https://your.gitlab.repo/myorg/project/download_export"}}

Restic index invalid data returned error

Problem

After a broken network connection, it is possible to get the restic error about invalid data returned for the index (for example when running restic check).

Solution

You would need to rebuild the index using the --read-all-packs flag, as described here (https://forum.restic.net/t/fatal-load-index-xxxxxxxxx-invalid-data-returned/3596/27), which does the rebuild from scratch.

restic rebuild-index -r $REPO --read-all-packs
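Afterwards, running a check again should confirm the index is readable (assuming $REPO is still set to the repository path):

restic check -r $REPO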

PostgreSQL connection string for Percona PostgreSQL K8S operator

Since the documentation does not contain any information about how to connect an existing application to the newly created Percona PGO cluster, you can use something like the following PostgreSQL connection string in your pod.

postgresql://username:password@cluster1.pgo-perc-production.svc.cluster.local/production

where cluster1.pgo-perc-production.svc.cluster.local points to your newly created cluster and /production is the database to connect to.
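To quickly test connectivity from inside the cluster, a throwaway client pod can be used (the pg-client name and the postgres:15 image are placeholders):

kubectl run pg-client --rm -it --image=postgres:15 -- psql "postgresql://username:password@cluster1.pgo-perc-production.svc.cluster.local/production"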