Author Archives: Muhammad

Angular – Progressive Web Apps using Service Workers

Service workers enable building progressive web applications. A service worker is a script that runs in the web browser and caches the application's resources. It preserves those resources even after the user closes the tab and serves them again the next time the application is requested.

Setting up Angular Project

Let’s first create a sample angular app.
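
A minimal sketch, assuming the Angular CLI is installed globally and using a hypothetical project name my-pwa-app:

# Create a new Angular workspace and application (the project name is illustrative)
ng new my-pwa-app
cd my-pwa-app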

We can use ng serve to verify that the application has been created successfully.
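
For example, from inside the project folder:

# Start the development server; by default the app is available at http://localhost:4200
ng serve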

We can open the application in the web browser and it does load successfully.

It must be remembered that ng serve does not work with service workers, so we need to host the application through an external web server. Let's use http-server for this; it can simply be installed through npm.
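
For example:

# Install http-server globally via npm
npm install -g http-server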

Just make sure that you are able to run it successfully:
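
For example, running it without arguments serves the current directory on its default port 8080:

http-server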

Now let’s run our sample PWA app using http-server. Here -c-1 would disable caching.
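
A sketch, assuming the application has already been built into dist/my-pwa-app (the hypothetical project name from above):

# Serve the build output with caching disabled (-c-1)
http-server -p 8080 -c-1 dist/my-pwa-app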

Adding Service Worker Support

Let's first verify that our application cannot be served offline. We can make the browser appear offline through Chrome DevTools.

Now we can add support for service worker by adding @angular/pwa.
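
For example:

# Add service worker support to the project
ng add @angular/pwa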

As we build the application using ng build --prod, we can see that the browser is able to load the application offline using the service worker.
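
For example, rebuilding and serving the output again (the dist path assumes the hypothetical project name from above):

# Produce a production build; the service worker is typically only enabled for production builds
ng build --prod
# Serve the rebuilt output with caching disabled
http-server -p 8080 -c-1 dist/my-pwa-app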

We can also check the Network tab to see the service worker in action:

go – segmentation fault: 11

Running go after a few days, I noticed that it was crashing with a segmentation fault: 11 error. In this post, we will try to get rid of this error in order to run go on our machine. I actually need it as a dependency for another development tool.

Reading through the internet, it seems that the program is attempting to access memory that it shouldn't. Let's see if running with sudo would be of any help. It doesn't…

As Micro Focus explains, it can also be caused by mismatched binaries. It is possible that some new development tools have messed up my dev environment. Let's first remove go and then reinstall it.
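
A sketch of the removal, assuming Go was installed by the macOS package installer to the default /usr/local/go location:

# Remove the existing Go installation
sudo rm -rf /usr/local/go
# The package installer also adds a PATH entry; remove it as well
sudo rm -f /etc/paths.d/go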

Now we can download the latest Go version from the official downloads page. Note that we are running this on macOS Catalina:

Let’s download and run through the installation:

Now we can see that go runs successfully.
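
For example:

# Confirm the new installation works
go version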

Hyperledger Fabric – Terminology

Hyperledger Fabric is private and permissioned. In order to gain access, peers need to be enrolled through a Membership Service Provider (MSP). It uses a deterministic consensus algorithm.

Asset

An asset holds a state and has ownership. Assets are key/value pairs representing anything with monetary value, whether tangible or intangible. They can be modified using chaincode transactions on a channel ledger, i.e. the state changes of assets are recorded as transactions on the ledger.

Chaincode

Chaincode is software that defines the asset and the transaction instructions for modifying it. The participating businesses sign off on the business logic, and chaincode allows these businesses to interact with the ledger.

Membership Service Provider

They are sometimes also referred to as Membership Identity Services. An MSP provides IDs to authenticated participants, which can then be used to define permissions on the ledger.

Access Control List (ACL)

ACLs are used for defining authorization on the ledger and chaincodes. They use the IDs provided by the MSP.

Node Types

There are two node types: peer nodes and ordering nodes. Peer nodes execute and verify transactions, while ordering nodes order these transactions and propagate them. Separating the node types this way provides efficiency and scalability. A consensus protocol is defined for ordering the transactions.

Ordering nodes also maintain the list of organizations that are allowed to create channels. This list of organizations is called a consortium, and it is kept in the "orderer system channel", which is administered by an orderer node. Orderers also enforce access control on channels, i.e. they control updates to the configuration of these channels. After an update, the new configuration block is propagated to the peers.

Fabric’s Ledger

Fabric's ledger comprises both a blockchain transaction log and a current-state database. The current state can be easily queried, while the log preserves asset provenance, recording asset creation and the state changes made by the various members.

There is one ledger per channel, and each peer maintains a copy of the ledger for every channel it is a member of.

Channels

Hyperledger Fabric uses channels to provide ledger privacy. Only the participants of a channel have visibility into the assets and transactions on that channel. This is unlike other blockchains where all participants have access to a public ledger. Channels are therefore restricted message paths defining a subset of participants.

Ordering Service Implementations

In the latest version of Hyperledger Fabric, there are three ordering service implementations: Solo, Kafka and Raft.

Multitenant Fabric

A fabric network can have multiple consortiums (lists of organizations allowed to create channels), rendering the blockchain multi-tenant.

Hyperledger Fabric Composer & web sandbox

Hyperledger Composer is a framework that allows the development of Hyperledger Fabric-based blockchain applications. It must be noted that it has been deprecated with Fabric 1.4, released in August 2019. This is the first long-term support release of Hyperledger Fabric, for which the maintainers have pledged to provide bug fixes for one year after the release date.

Hyperledger Composer Web Sandbox

Hyperledger Composer also has an online sandbox that allows us to explore the concepts of participants, assets, transactions and events.

https://composer-playground.mybluemix.net/

Hyperledger Composer web sandbox

Hyperledger Concepts

Hyperledger Fabric Composer has concepts of participants, assets, transactions and events.

Here is a participant created by instantiating SampleParticipant.

And here we have an asset created using SampleAsset. The asset is assigned a sample assetId, as required by the model.

We can create a transaction and publish it, which can later be loaded from history.

In this model, the transaction emits an event with details about the old value and the new value being set for the asset identified by assetId. We can see the details of the event; in a real application, such events would be consumed by client applications.

Hyperledger Fabric Development using VSCode

Visual Studio Code has a number of extensions that support the development of Hyperledger Fabric-based blockchain applications using Hyperledger Composer.

Hyperledger Composer VS Extension

IBM Blockchain Platform

Kafka Connect – Externalizing Secrets – KIP 297

In order to connect with a data source or sink, we need to use credentials. Kafka Connect added support for specifying credentials using config providers, and support for a file config provider ships with the installation package. This is discussed in KIP-297, which was released in Apache Kafka 2.0.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-297%3A+Externalizing+Secrets+for+Connect+Configurations

Here is the JIRA.

https://issues.apache.org/jira/browse/KAFKA-6886

Changes in the worker properties file [etc/schema-registry/connect-avro-distributed.properties]:
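
A sketch of the additions, assuming the worker properties file at the path above:

# Enable the file config provider in the Connect worker properties
cat >> etc/schema-registry/connect-avro-distributed.properties <<'EOF'
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
EOF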

Fix to mask passwords from REST interface:

Since connectors are exposed through Connect's RESTful service, the user credentials would otherwise be returned in responses. An additional update was required to fix this, which was done under KAFKA-5117.

https://issues.apache.org/jira/browse/KAFKA-5117


The fix for the RESTful service was released in Kafka 2.1.x.
https://issues.apache.org/jira/browse/KAFKA-5117

If you are using the Confluent Platform, it should be available in 5.1.x.

https://docs.confluent.io/current/installation/versions-interoperability.html

Credentials File

Here is the credentials file couchbase.properties:
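
A sketch with hypothetical key names and paths; the ${file:<path>:<key>} placeholders are what the FileConfigProvider resolves at runtime:

# Hypothetical credentials file referenced by the connector configuration
cat > /etc/kafka-connect/secrets/couchbase.properties <<'EOF'
couchbase.username=connect_user
couchbase.password=connect_password
EOF

# In the connector configuration, reference the keys instead of literal values, e.g.:
#   connection.username=${file:/etc/kafka-connect/secrets/couchbase.properties:couchbase.username}
#   connection.password=${file:/etc/kafka-connect/secrets/couchbase.properties:couchbase.password}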

Kafka Connect Couchbase Connector – Document Expiration

Couchbase supports document expiration. If documents are ingested from Kafka through the Couchbase sink connector, the expiration can be set on the connector side, so that an expiry is applied to each record upserted as a document in Couchbase.

Here is a sample configuration for the connector:
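
A sketch in properties form; treat the expiry property name and its value format as assumptions to verify against the documentation for your connector version:

cat > couchbase-sink.properties <<'EOF'
name=couchbase-sink
connector.class=com.couchbase.connect.kafka.CouchbaseSinkConnector
tasks.max=1
topics=orders
connection.cluster_address=127.0.0.1
connection.bucket=orders
connection.username=connect_user
connection.password=connect_password
# Assumed property name: time-to-live applied to each upserted document (e.g. 30 days)
couchbase.document.expiration=30d
EOF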

Here is a sample configuration from couchbase:

Sample Config from Couchbase: https://github.com/couchbase/kafka-connect-couchbase/blob/master/config/quickstart-couchbase-sink.properties

Securing JMX on Confluent Kafka

Confluent Kafka processes start with default arguments in which JMX authentication is disabled. This is a security vulnerability and might lead to issues.
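
For reference, the JMX defaults set in kafka-run-class look roughly like this (note authenticate=false):

# Excerpt (approximate) from bin/kafka-run-class
if [ -z "$KAFKA_JMX_OPTS" ]; then
  KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
fi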

Confluent Kafka consists of the following services, and we need to enable JMX authentication for all of them:

  1. Kafka Broker
  2. Zookeeper
  3. Schema Registry
  4. Kafka Rest
  5. KSQL
  6. Confluent Control Center

kafka-run-class

Kafka Broker, Zookeeper and Kafka REST use kafka-run-class. We can update it to enable JMX authentication for these services as follows:
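
A sketch of the change using the standard JVM JMX system properties; the password and access file paths are examples:

# In bin/kafka-run-class: require authentication for remote JMX connections
if [ -z "$KAFKA_JMX_OPTS" ]; then
  KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true \
    -Dcom.sun.management.jmxremote.authenticate=true \
    -Dcom.sun.management.jmxremote.ssl=false \
    -Dcom.sun.management.jmxremote.password.file=/etc/kafka/jmxremote.password \
    -Dcom.sun.management.jmxremote.access.file=/etc/kafka/jmxremote.access"
fi

The same JMX system properties can be applied in the run-class scripts of the remaining services below.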

ksql-run-class

In order to update KSQL, apply the same JMX settings in ksql-run-class in the bin folder of your Confluent installation.

control-center-run-class

On the Confluent Control Center (C3) server, the same update is required in control-center-run-class.

schema-registry-run-class

Schema Registry uses schema-registry-run-class, which can be updated in the same way.

kafka-rest-run-class

kafka-rest-run-class is used to run the Kafka REST proxy and can be updated in the same way.

Kafka Tools – kafkacat – non-JVM Kafka producer / consumer

kafkacat is an amazing Kafka tool based on librdkafka, a C/C++ library for Kafka. This means it has no dependency on the JVM for working with Kafka data as an administrator. It can be used to consume and produce messages from Kafka topics, and it also supports retrieving metadata about Kafka brokers and topics.

Getting kafkacat

The simplest way is to use Docker. Confluent has an image available in its repository, and it is also packaged and made available by others.
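
For example, the confluentinc/cp-kafkacat image can be run directly (adjust the broker address to one reachable from inside the container):

docker run --rm --interactive --tty confluentinc/cp-kafkacat \
  kafkacat -b localhost:9092 -L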


I am on a Mac, where kafkacat can also be installed using Homebrew. But if you don't have the Xcode developer tools installed, the installation can fail with errors.


It was clear what needed to be done here: just install the developer tools using xcode-select --install as follows:

xcode-select --install

And it goes through the installation steps successfully.


Now the installation runs successfully. You can see that it also installed the dependencies automatically, including librdkafka.

brew install kafkacat

Install jq

The output from the tool can also be requested in JSON format using the -J switch. I would recommend installing the jq tool, which allows you to pretty-print the JSON on the console.
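
For example, on macOS:

# Install jq and pretty-print kafkacat's JSON output
brew install jq
kafkacat -b localhost:9092 -L -J | jq .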

For Metadata

The -L switch is used to get metadata information for all topics in the cluster. You can also specify a particular topic you are interested in using the -t switch.
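
For example (the broker address is illustrative):

# Metadata for the whole cluster
kafkacat -b localhost:9092 -L
# Metadata for a single topic
kafkacat -b localhost:9092 -L -t my-topic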

As Consumer

Most commonly, you might use kafkacat for consuming messages in an environment to see the details about the messages in a topic.
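
A basic consume, reading from the beginning of each partition (topic and broker are illustrative):

kafkacat -b localhost:9092 -t my-topic -C -o beginning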

You can use the format expression with various parameters:
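
For example:

# Print topic, partition, offset, key and payload for each message
kafkacat -b localhost:9092 -t my-topic -C \
  -f 'Topic %t [%p] at offset %o: key %k: payload %s\n'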

As Producer

In a development environment specifically, one needs a tool to easily send messages to a topic to be consumed by a consumer service. kafkacat supports producing messages directly from the console.
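
For example:

# Each line typed on the console becomes one record (Ctrl-D to finish)
kafkacat -b localhost:9092 -t my-topic -P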

It also supports loading data from a file and sending it to the topic. If we don't use the -l switch in this case, it would send the contents of the whole file as one record.
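
For example, with a hypothetical line-delimited file:

# With -l, each line of the file becomes a separate record;
# without it the whole file would be sent as a single record
kafkacat -b localhost:9092 -t my-topic -P -l data.txt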

Schema Registry Support

Apparently, there is still no schema registry support, so please be careful with your messages in Avro format.


Kafka Tools – kafka-delete-records

While working with Kafka we sometimes need to purge records from a topic. This is especially needed in a development environment where we just want to get rid of some records and keep the others. Previously we discussed how we can purge all records from a topic by playing with its minimum retention configuration, which is useful when the records are far apart in time. Here is the reference to that discussion:

purging-data-blog

Kafka has one more extremely useful utility that also allows us to delete records: the kafka-delete-records.sh tool. You can find it in the bin folder of your installation. Here I have the Confluent Community Edition installed.


Let's first create a topic my-records and load it with the messages in the source-data.txt file, which has 15 lines. The records are pushed to Kafka using the kafka-console-producer tool.
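
A sketch of those steps (the broker address is illustrative; older kafka-topics versions take --zookeeper instead of --bootstrap-server):

# Create the topic with a single partition
kafka-topics --bootstrap-server localhost:9092 --create \
  --topic my-records --partitions 1 --replication-factor 1

# Push the 15 lines of the file as individual records
kafka-console-producer --broker-list localhost:9092 \
  --topic my-records < source-data.txt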

We can verify that the topic has actually been created using the kafka-topics tool with the --describe switch.

The tool also lists the number of partitions, the replication factor and any overridden configurations of the topic.

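The describe command, with illustrative output:

kafka-topics --bootstrap-server localhost:9092 --describe --topic my-records

# Illustrative output:
# Topic: my-records  PartitionCount: 1  ReplicationFactor: 1  Configs:
#   Topic: my-records  Partition: 0  Leader: 0  Replicas: 0  Isr: 0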

We can also verify that the messages have actually been written by checking the offsets of records in the topic. GetOffsetShell is another tool for this purpose, which we have discussed previously: http://www.alternatestack.com/development/app-development/kafka-tools-kafka-tools-getoffsetshell/.
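
For example:

# Latest offsets (--time -1) per partition; with 15 records this should show my-records:0:15
kafka-run-class kafka.tools.GetOffsetShell \
  --broker-list localhost:9092 --topic my-records --time -1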

kafka-delete-records supports and requires certain switches, including the cluster to connect to and details about the records to delete. The records' details are specified in a JSON file passed to the tool using one of its switches.

Here is a sample format of the JSON file. Note that we can make it granular down to the level of a single partition. Here offset is the minimum offset we want to keep, so all records before the specified offset will be purged from the topic's partition.
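
A sketch of the offsets file and the invocation (in Confluent installations the script is named kafka-delete-records, without the .sh extension):

cat > delete-records.json <<'EOF'
{
  "partitions": [
    { "topic": "my-records", "partition": 0, "offset": 3 }
  ],
  "version": 1
}
EOF

kafka-delete-records --bootstrap-server localhost:9092 \
  --offset-json-file delete-records.json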

Specifying an incorrect topic always results in an error. Here I used the topic foo instead of the my-records topic created above. Obviously, this resulted in: This server does not host this topic-partition.

Specifying a non-existent partition also results in an error. Here I specified partition 1 instead of the topic's existing partition 0.

I also experimented with different offset values in the JSON file and observed the resulting output.

In the end, the records did get deleted from the partition, which we can verify using the kafka-console-consumer tool. The tool was successful and deleted 3 records from the partition, as intended.
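
For example:

# Read the topic from the beginning; records before the specified offset are gone
kafka-console-consumer --bootstrap-server localhost:9092 \
  --topic my-records --from-beginning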

If we need to delete all records from a partition, use -1 for offset in the configuration.