Monthly Archives: June 2015

Compound Types in Scala

In the previous post we looked at mixin composition with traits in Scala. This post is about a similar concept called compound types. The syntax is exactly the same: it uses the familiar with keyword. But there are some differences. Mixins are used when defining types or instantiating them. Compound types, on the other hand, are used to specify a dependency as a sub-type of more than one type.

Example Scenario

Let’s consider a scenario where a person is asked to cover some distance. He can cover the given distance by walking or running. Apparently the person loves walking more than running. In order to keep things interesting for him, he can only walk in patches of 5000 ft. He cannot walk more than three patches while covering the distance and has to run the remainder.

Distance more than the maximum of 3 walkable patches


There can be no partial patches unless the given distance doesn’t even cover a single patch. After the last complete patch, the remaining distance must be covered by running.

Partial Patch


Putting into Code

Scala provides the with keyword to specify dependencies in terms of compound types. Here we first determine how many patches the person can walk before starting to run. In order to do that, we create a sequence of the numbers 1 to 3. collect is chosen because we need to both filter and map.

Here we are filtering all those patches which can be covered by walking. If the distance lies between two patch boundaries, the partial patch is filtered out by collect. We then map each patch number to the distance covered at that boundary.

Then we take the maximum patch boundary. A distance too short to cover even a single patch can be covered entirely by walking.
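Since the original code listing is missing here, the following is a minimal sketch of the idea, assuming hypothetical names like Distances, cover, patchLength and maxPatches (only Walker, Runner and the with keyword come from the post itself). The parameter of cover uses a compound type, so any argument must be a sub-type of both traits:

```scala
trait Walker { def walk(ft: Int): String = s"Walking $ft ft" }
trait Runner { def run(ft: Int): String  = s"Running $ft ft" }
class Person extends Walker with Runner

object Distances {
  val patchLength = 5000 // ft per walkable patch
  val maxPatches  = 3    // at most three patches can be walked

  // Compound type: person must be BOTH a Walker and a Runner
  def cover(person: Walker with Runner, distance: Int): Seq[String] = {
    // collect filters the patch boundaries lying within the distance and
    // maps each patch number to the distance covered at that boundary
    val boundaries = (1 to maxPatches).collect {
      case patch if patch * patchLength <= distance => patch * patchLength
    }
    // the farthest boundary; a distance shorter than one patch is walked whole
    val walked = if (boundaries.isEmpty) distance else boundaries.max
    Seq(person.walk(walked), person.run(distance - walked))
  }
}

object Main extends App {
  Distances.cover(new Person, 17000).foreach(println) // walks 15000 ft, runs 2000 ft
}
```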

Mixin Class Compositions in Scala


Scala allows a class to inherit from other classes. With this inheritance, it can also mix in behaviors from traits. This allows additional behavior to be added to classes. Traits can be introduced to generalize orthogonal behaviors and then be mixed in to class definitions.

Here we have introduced two traits, Walker and Runner. A person can be both a walker and a runner, so we are mixing both traits into our Person class definition.

Here we are instantiating the Person class defined above. You can notice that we are using run and walk methods which have been mixed-in from Runner and Walker traits respectively.
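The original listing is not shown, so here is a small sketch of what the traits, the class and its usage might look like (the method bodies are illustrative):

```scala
trait Walker { def walk(): String = "walking" }
trait Runner { def run(): String  = "running" }

// mixin composition: Person inherits behavior from both traits
class Person extends Walker with Runner

object Main extends App {
  val person = new Person
  println(person.walk()) // mixed in from Walker
  println(person.run())  // mixed in from Runner
}
```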

Mixin compositions can also be used to define singleton objects. Here we have a Person object defined through mixin composition with Walker and Runner. Also notice the syntax for mixing in multiple traits, each with its own with keyword.

We can also mix in traits while instantiating classes; the created object then has the added members from the mixed-in traits. A human object would have everything the Human class defines. Additionally, it can use members from the Walker and Runner traits.

It can also be a case class. Here we have turned Human into a case class with a name field. We are mixing in members from Walker and Runner while instantiating a Human.
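A sketch of instantiation-time mixins with a case class (the trait bodies are again illustrative):

```scala
trait Walker { def walk(): String = "walking" }
trait Runner { def run(): String  = "running" }

case class Human(name: String)

object Main extends App {
  // traits mixed in at instantiation time
  val human = new Human("Jane") with Walker with Runner
  println(human.name)   // member of the Human case class
  println(human.walk()) // added by Walker
  println(human.run())  // added by Runner
}
```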

Scala only allows traits to be mixed in; we cannot mix in classes. An attempt to do so results in a compile time error as follows:
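A minimal sketch of the failing case (the Animal and Person names are mine; the quoted error text is roughly what scalac reports):

```scala
class Animal            // a class, not a trait
trait Walker { def walk(): String = "walking" }

class Person

object Main extends App {
  // mixing in a trait at instantiation is fine:
  val ok = new Person with Walker
  println(ok.walk())

  // mixing in a class is not; uncommenting the next line fails to compile with
  // an error along the lines of "class Animal needs to be a trait to be mixed in":
  // val bad = new Person with Animal
}
```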


Traits Mixins with conflicting members
It is possible for traits to have conflicting members. In the following example there are two traits, Fixer and Doer, with the same member doWork. Now which of these doWork definitions would be picked when we invoke it on an instance?

Actually Scala doesn’t tolerate this conflict when mixing in. It simply results in a compile time error.


As the message suggests, we need to override the conflicting member to get rid of the error. Let’s override it and provide a definition of the doWork method.

This makes the compiler happy. But there is an interesting thing in the above code: we have called super.doWork(). So would it use the one from Doer or Fixer? Scala actually has a straightforward policy for this: it resolves conflicting members from right to left in the mixin order. Since we have Doer on the extreme right, it uses doWork from Doer.
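A sketch of the conflict and its resolution (the doWork bodies are illustrative; the right-to-left rule shown in the comment is Scala's trait linearization):

```scala
trait Fixer { def doWork(): String = "Fixer fixing" }
trait Doer  { def doWork(): String = "Doer doing" }

// Without the override this would not compile: Worker would inherit
// conflicting members doWork from both Fixer and Doer
class Worker extends Fixer with Doer {
  // super is resolved right to left through the mixins, so Doer's doWork is used
  override def doWork(): String = super.doWork()
}

object Main extends App {
  println(new Worker().doWork()) // Doer doing
}
```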

Super when mixin


Required Mixin for Instantiation
Scala also provides support for mandatory trait mixins for class instantiation. This is especially useful when we have a more abstract trait definition and a more concrete trait is expected for instances of the class.

In the above example, we have introduced a class Worker. It requires Fixer to be mixed in for instantiation. We have also introduced a new trait AwesomeFixer which extends Fixer.

Required Mixin Compiler Error


As we add the mixin at instantiation, the compiler error goes away.
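One way this requirement is commonly expressed in Scala is a self-type annotation; a sketch using the post's Worker, Fixer and AwesomeFixer names (the fix method is illustrative):

```scala
trait Fixer { def fix(): String = "fixing" }
trait AwesomeFixer extends Fixer { override def fix(): String = "awesome fixing" }

// self-type annotation: Worker requires a Fixer to be mixed in at instantiation
class Worker { self: Fixer => }

object Main extends App {
  // val plain = new Worker // compile error: Worker does not conform to its self-type
  val worker = new Worker with AwesomeFixer // mixing in a concrete Fixer satisfies it
  println(worker.fix())
}
```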


Refinements with Mixins

While doing the mixins, we can apply further refinements. In the following example, we have introduced a case class XYZ with a field fullName. There is a method perform, which has a parameter performer specified in terms of a structural type with the members name (a field) and doWork (a method). We have instantiated obj from XYZ mixed in with Worker. Still, it doesn’t fulfill the parameter’s structural requirement. We can fix this with a refinement, creating a new field name and assigning it the fullName property from XYZ.
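A sketch of the refinement, reusing the post's names (XYZ, fullName, perform, performer, doWork); the return values are illustrative:

```scala
import scala.language.reflectiveCalls

trait Worker { def doWork(): String = "working" }
case class XYZ(fullName: String)

object Perform {
  // performer is a structural type: any value with a name field and a doWork method
  def perform(performer: { val name: String; def doWork(): String }): String =
    s"${performer.name}: ${performer.doWork()}"
}

object Main extends App {
  // refinement while mixing in: add a name member assigned from fullName
  val obj = new XYZ("John Doe") with Worker {
    val name: String = fullName
  }
  println(Perform.perform(obj))
}
```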

Scala’s Singleton Objects are powerful

Scala provides support for singletons through object, which combines an anonymous type definition with its instantiation. The same instance is used throughout the life of the application. This is different from the Gang of Four’s singleton implementation that we are so used to. I have found various useful features which spawn from this great ability to create objects.

Defining Singleton Hierarchies

As we have just discussed, objects are introduced to define singletons, but we can also use them to introduce singleton hierarchies. In these hierarchies an object contains other object(s) in a hierarchical fashion. We don’t need to define a corresponding type for the container hierarchies.

Here we are just defining key/value pairs, but Scala’s singleton objects allow us to further classify the key/value pairs. This is easier to maintain.
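A sketch of such a hierarchy; the names and values below are hypothetical (only MemberType and fullTime appear in the post):

```scala
object Defaults {
  object MemberType {
    val fullTime = "Full Time"
    val partTime = "Part Time"
  }
  object MemberStatus {
    val active   = "Active"
    val inactive = "Inactive"
  }
}

object Main extends App {
  // no container type was ever declared for this hierarchy
  println(Defaults.MemberType.fullTime)
}
```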

Singleton Hierarchies


We are also saved from the huge pain of a long IntelliSense scroll when we hit dot on the key/value container. Here we are just trying to use MemberType; notice how clean the IntelliSense looks. Isn’t it amazing?


Scala has amazing support for importing the members of a singleton. Just import everything from it and we can use all fields and member functions defined in the singleton without any name qualifier. They just appear as local functions, which they certainly are not. Here we have imported the members of MemberType. You can notice that we are using fullTime as if it were defined locally.
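A sketch of the import (the values are hypothetical; MemberType and fullTime are from the post):

```scala
object MemberType {
  val fullTime = "Full Time"
  val partTime = "Part Time"
}

object Main extends App {
  import MemberType._ // every member of the singleton comes into scope
  println(fullTime)   // used as if it were defined locally
}
```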


There is another way to do this: we don’t have to use an object directly. We can also use the Selfless Trait pattern. The pattern gives a trait a companion object, which has the same name as the trait.

Since Scala allows importing the members of an object, we can simply import the companion object. This can be especially useful when writing unit tests.
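A sketch of the Selfless Trait pattern; the Greetings name and method are hypothetical:

```scala
// a trait plus a companion object with the same name
trait Greetings {
  def greet(name: String): String = s"Hello, $name!"
}
object Greetings extends Greetings

object MyTest extends App {
  // a unit test can mix in the trait, or simply import the companion's members
  import Greetings._
  println(greet("tester"))
}
```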

Defining Utility methods

As we discussed above, when we import an object, all of its members are statically imported and we can use them directly. We can use this feature to define helper utility methods in objects and then import the object. Alternatively, with the Selfless Trait pattern, we can define a trait and a companion object (i.e. with the same name). In either case, we need to import the object to have access to its members.

Now we can import the object’s members and use them as if they were local members. Here we are importing the members of the HelperMethods object. Look how easily we are using the greetings method without any preceding type / object name.
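A sketch with the post's HelperMethods and greetings names; the message text is mine:

```scala
object HelperMethods {
  // hypothetical utility method; the post's example greets a customer
  def greetings(customer: String): String = s"Hello $customer, welcome!"
}

object Main extends App {
  import HelperMethods._
  // no preceding type / object name required
  println(greetings("John"))
}
```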

Debugging the code clearly shows that the method returns the message to greet a customer.

Utility Methods from Singleton Object


Tuple with named members

Tuples are especially useful when we have to return more than one value from a method. But the fields in a tuple are not named. Scala supports tuples, but the individual fields are still not named; they are accessed by underscore-prefixed index. So we end up with _1, _2, … in our code, which is not very readable. This also turns into a maintenance nightmare in our code base.

Objects help us solve this problem too. We can directly return an object with named fields.

We don’t need to define the return type of the function, thanks to Scala’s type inference. But since Scala supports structural typing, we can do it if need be. An example case is a trait declaring a structural type as the return type of an overridable method.
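A sketch of the idea; the names and values are hypothetical (the return type is an inferred structural type, so member access goes through reflection):

```scala
import scala.language.reflectiveCalls

object Students {
  // return an object with named fields instead of a tuple with _1 / _2
  def findTopStudent = new {
    val id   = 1
    val name = "John"
  }
}

object Main extends App {
  val student = Students.findTopStudent
  println(s"${student.id}: ${student.name}") // named access, no underscores
}
```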

Scala Macwire: Inject all Implementations of a trait for factory pattern

Macwire is an amazing DI (Dependency Injection) container for Scala. It allows us to wire up the dependencies required by a type to instantiate itself, as long as those dependencies are registered with it.

If you are coming from a C# background and are used to the Unity framework, then you must be used to registering interfaces and implementations with Unity. Macwire is a bit different in that we don’t need to provide such bindings. It automatically detects the binding between a trait and a concrete type as long as the concrete type is registered with the Macwire framework.

While implementing factories, you might be used to injecting all the implementations of an abstract type (interface) into the factory constructor. Unity allows such injections if it finds more than one implementation of an interface. The problem is that Macwire doesn’t automatically allow such injections. This is because of the way it works: it keeps track of the registrations in the current context and the traits implemented by those registrations. If the constructor of a type being constructed requests a trait implementation, it just looks at these registrations and injects the appropriate concrete type implementing the trait.

Macwire allows multiple registrations of concrete types even if they implement the same trait, but it cannot directly inject them all when a constructor requires a collection of trait implementations. Au contraire, it results in a compile time error. In this post, we are trying to find a solution to this problem.

Let’s first add the macwire dependencies to our sbt file. It would make sure that the necessary libs are copied in your build.
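The dependency line would look roughly like the following; the artifact name and version are from memory and may differ for your Macwire release, so check the Macwire documentation for the exact coordinates:

```scala
// build.sbt — coordinates are illustrative
libraryDependencies += "com.softwaremill.macwire" %% "macros" % "1.0.5"
```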

As you compile it, you should be able to find these libraries in the External Libraries section of your project.



Now let’s introduce a trait named Operation. Here we have two implementations of the trait: OperationFirst and OperationSecond. The trait requires an implementation of a method execute. It also requires a field operationType identifying the type of operation. This can be used by the factory responsible for providing Operation-based instances.

OperationFactory provides an instance of a type implementing the Operation trait. It uses a key to determine the required Operation type. In case it is not able to find a type for the specified key, it throws an IllegalArgumentException.

We need to wire up these types using Macwire. Here MyModule is using Macwire’s wire[T] to register these types. As discussed above, this results in bindings between these types and the traits they implement. Here we are registering OperationFirst and OperationSecond, which should define their binding with the Operation trait. We are also registering OperationFactoryImpl. The module has just one public property. Accessing the property should result in the creation of the whole object graph by Macwire.
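A sketch of the types and module using the names from the post; the wire[T] import shown matches the Macwire 1.x releases (later versions use com.softwaremill.macwire._), and the method bodies are illustrative:

```scala
import com.softwaremill.macwire.MacwireMacros._

trait Operation {
  val operationType: String
  def execute(): Unit
}

class OperationFirst extends Operation {
  val operationType = "first"
  def execute(): Unit = println("Executing first operation")
}

class OperationSecond extends Operation {
  val operationType = "second"
  def execute(): Unit = println("Executing second operation")
}

trait OperationFactory {
  def getOperation(key: String): Operation
}

// the factory asks for ALL Operation implementations in its constructor
class OperationFactoryImpl(operations: List[Operation]) extends OperationFactory {
  def getOperation(key: String): Operation =
    operations.find(_.operationType == key).getOrElse(
      throw new IllegalArgumentException(s"No operation found for key: $key"))
}

class MyModule {
  lazy val operationFirst  = wire[OperationFirst]
  lazy val operationSecond = wire[OperationSecond]
  // this wiring fails to compile: Macwire cannot find a List[Operation]
  lazy val operationFactory: OperationFactory = wire[OperationFactoryImpl]
}
```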

In the code below we are getting the OperationFactory object from a MyModule instance. We are trying to get an operation by providing “first”, “second” and “third” as keys for the factory. It looks like the code should work fine for “first” and “second”, where the factory can provide the OperationFirst and OperationSecond instances respectively. Since it is not able to find an implementation of Operation with “third” as operationType, it should result in an exception. The code catches the exception and prints its message.

As we compile the code, the compiler complains about the required implementation of a List of Operation-based types. Macwire is trying to find such a registration. Since it cannot find one, it produces this compile time failure. Having more than one registration of types implementing the Operation trait is of no help to Macwire here.


It is actually very simple to fix. Since Macwire is trying to find a List of Operation in the current context, we can provide it with one. Here we have just created a List of all the wired-up instances of Operation types. We can keep it lazy.
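A sketch of the fixed module, reusing the post's type names (assuming the Operation types, OperationFactory and wire[T] are in scope as described earlier in the post):

```scala
class MyModule {
  lazy val operationFirst  = wire[OperationFirst]
  lazy val operationSecond = wire[OperationSecond]

  // provide the List[Operation] Macwire is looking for in the current context
  lazy val operations: List[Operation] = List(operationFirst, operationSecond)

  lazy val operationFactory: OperationFactory = wire[OperationFactoryImpl]
}
```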

Now the code compiles fine and behaves as expected when we debug it.



PartialFunction (Partially defined functions) In Scala

A partial function is a unary function which is defined only for certain domain (i.e. input) values. Partial functions can be introduced anonymously, or defined using the PartialFunction trait.

In order to understand partial functions, let’s first introduce a type Student. It has three properties, id, sType and name, to keep information about the identity, type and name of a student.

As discussed above, we can define partial functions anonymously or using the PartialFunction trait. In the following example we have introduced two partial functions, fullTimeStudentNameFunction and partTimeStudentNameFunction, which return the name of a full-time or part-time student respectively. Here we are also using the functions to get the names of the provided students.
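A sketch of both styles, using the post's names (the sType string values are assumptions):

```scala
case class Student(id: Int, sType: String, name: String)

object StudentFunctions {
  // anonymous partial function literal
  val fullTimeStudentNameFunction: PartialFunction[Student, String] = {
    case Student(_, "fullTime", name) => name
  }

  // the same idea via an explicit PartialFunction implementation
  val partTimeStudentNameFunction = new PartialFunction[Student, String] {
    def isDefinedAt(s: Student): Boolean = s.sType == "partTime"
    def apply(s: Student): String = s.name
  }
}

object Main extends App {
  import StudentFunctions._
  println(fullTimeStudentNameFunction(Student(1, "fullTime", "Alice")))
  println(partTimeStudentNameFunction(Student(2, "partTime", "Bob")))
  // fullTimeStudentNameFunction(Student(2, "partTime", "Bob")) would throw a MatchError
}
```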

Debugging the above code provides the correct result and we get the name of students from their respective functions.

Partial Function output


Since a partial function is defined only for certain input values, a non-supported input results in a run-time failure as follows:

Scala Match Error


Partial Functions usage in Scala’s collection API:
Scala’s collection API has a few methods which specifically need partial functions for their operation. Just look at the IntelliSense in IntelliJ IDEA. Here the andThen and collect methods expect a partial function to work on items in the collection.

Partial Functions & Collections


The collect method filters the elements and maps those passing the filter. It is more concise than providing separate definitions of both filter and map functions.
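A sketch of collect over a list of students (the sample data is mine):

```scala
case class Student(id: Int, sType: String, name: String)

object Demo {
  val students = List(
    Student(1, "fullTime", "Alice"),
    Student(2, "partTime", "Bob"),
    Student(3, "fullTime", "Carol"))

  // collect filters (full-time only) and maps (to the name) in a single pass
  val fullTimeNames = students.collect { case Student(_, "fullTime", name) => name }
}

object Main extends App {
  Demo.fullTimeNames.foreach(println) // Alice, Carol
}
```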

Here is the output of the above code. It filters and maps correctly. We are using the result to print the names of the students.


Let’s compare the results with a separate definition of filter and map. We can see that the results are the same in both cases.
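A sketch of the comparison (sample data is mine):

```scala
case class Student(id: Int, sType: String, name: String)

object Comparison {
  val students = List(
    Student(1, "fullTime", "Alice"),
    Student(2, "partTime", "Bob"),
    Student(3, "fullTime", "Carol"))

  // separate filter and map steps
  val viaFilterMap = students.filter(_.sType == "fullTime").map(_.name)
  // single collect step
  val viaCollect = students.collect { case Student(_, "fullTime", name) => name }
}

object Main extends App {
  // both approaches produce the same result
  println(Comparison.viaFilterMap == Comparison.viaCollect) // true
}
```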

Filter with Map & collect


Using Partial Functions with Map

A partial function can also be used with a collection’s map function, but if there is an input which is not supported by the partial function, it results in the same match error.


Chaining Partial Functions

Partial functions can also be chained using andThen and orElse. In the following example, we are chaining two partial functions to support the whole domain of student types.
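A sketch of the chaining (names and data are mine; the uppercasing step is just to show andThen):

```scala
case class Student(id: Int, sType: String, name: String)

object Chained {
  val fullTimeName: PartialFunction[Student, String] = { case Student(_, "fullTime", n) => n }
  val partTimeName: PartialFunction[Student, String] = { case Student(_, "partTime", n) => n }

  // orElse covers the whole domain of student types; andThen post-processes the result
  val anyName = (fullTimeName orElse partTimeName) andThen (_.toUpperCase)
}

object Main extends App {
  println(Chained.anyName(Student(1, "partTime", "Bob"))) // BOB
}
```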



Partially Applied Functions in Scala

Partially applied functions allow us to fix any parameter of a function. After partial application, a new function is created with the remaining parameters of the original function. In this post we are going to discuss the details of this feature.

Let’s create a function which accepts three parameters. It uses the arguments for an algorithm and returns the result of the calculations.

Scala allows us to fix any argument, no matter its placement in the parameter list. Here we have a partially applied function add1 with its second argument fixed to 2. A partially applied function can go through the same process again, creating more refined partially applied functions.

Just look at the definition of add2, which is created by fixing the first argument of add1. Now add2 is a Function1, which just uses its argument as the third parameter value. It applies the previously fixed values in the calculation and then returns the result.

We can also use add1 as a Function2 e.g. sum2. We can just provide the values for the remaining arguments to get the result of the calculation.
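A sketch using the post's add1 and add2 names; the three-parameter add and its body are assumptions:

```scala
object Calc {
  def add(a: Int, b: Int, c: Int): Int = a + b + c

  // fix the second argument, regardless of its position in the parameter list
  val add1 = add(_: Int, 2, _: Int) // (Int, Int) => Int

  // refine further by fixing the first remaining argument of add1
  val add2 = add1(5, _: Int)        // Int => Int
}

object Main extends App {
  println(Calc.add1(1, 3)) // 1 + 2 + 3 = 6
  println(Calc.add2(10))   // 5 + 2 + 10 = 17
}
```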

Partially Applied Functions



Currying in Scala

In our .NET blog [], we have discussed currying in detail. Just to quote from there:

Currying is a transformation technique in the functional programming space. It is used to convert a function with multiple parameters into a set of functions with a single parameter each. It can be used to create refined algorithms where a partial list of arguments can be provided to create more sophisticated operations. The nomenclature is in honor of Haskell Curry, after whom the Haskell functional programming language is also named. It must be remembered that the end result of a curried version of a function should be exactly the same as its non-curried counterpart.

Let’s consider a simple function. The function has three parameters of type Double. It just adds them and returns the result. We don’t know why this function was implemented as such, but just think of it as a given and it would ease the pain :)

Unlike C#, Scala has built-in support for currying. You can just call curried on a function value and get its curried counterpart, which is a series of single-argument functions. In the following image, we are calling curried; just look at the IntelliSense. It clearly shows what it would do to our function: it would convert add into a series of single-parameter functions of type Double. At the end, it would return the result, which would also be of type Double.


Let’s use the curried version of our add function. Every function in the series results in another Function1, except for the last one, which results in a Double.

Let’s see exactly how this series is used by Scala. Look at the transformations into the different Function1(s) and see how sum is assigned a value of type Double.


The opposite of currying is uncurrying, and Scala definitely supports it. Here we are converting our curried add function back to its uncurried form. It accepts three parameters of type Double and returns a result of type Double.
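A sketch of both directions; add's body is an assumption, while curried and Function.uncurried are standard library features:

```scala
object Curry {
  def add(a: Double, b: Double, c: Double): Double = a + b + c

  // curried: Double => (Double => (Double => Double))
  val curriedAdd = (add _).curried

  // and back again: (Double, Double, Double) => Double
  val uncurriedAdd = Function.uncurried(curriedAdd)
}

object Main extends App {
  println(Curry.curriedAdd(1.0)(2.0)(3.0))   // 6.0
  println(Curry.uncurriedAdd(1.0, 2.0, 3.0)) // 6.0
}
```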

What about currying and method overloading?
With the syntax discussed above, it seems impossible to curry overloaded functions in Scala. Scala has another syntax for currying where we spell out the argument placeholders with their types. We can use that syntax for currying in the case of overloading. There is more typing involved with that syntax, but that is the only way, unfortunately :(

This syntax is practically equivalent to the syntax we discussed above, and it works when we don’t have an overload. In the case of an overload, we have to declare it like a partially applied function (discussed earlier); otherwise, we get the following compile time failure:


Here we have two add functions with different parameter lists. We need to specify the details of the parameters if we are to generate the curried versions.
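A sketch of currying overloaded functions; the two add overloads and their bodies are assumptions:

```scala
object Overloads {
  def add(a: Int, b: Int): Int = a + b
  def add(a: Int, b: Int, c: Int): Int = a + b + c

  // (add _).curried would be ambiguous here, so we spell the parameters out
  val curried2 = (add(_: Int, _: Int)).curried
  val curried3 = (add(_: Int, _: Int, _: Int)).curried
}

object Main extends App {
  println(Overloads.curried2(1)(2))    // 3
  println(Overloads.curried3(1)(2)(3)) // 6
}
```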

Structural Sub Typing in Scala

Structural typing is compile-time checked duck typing. It allows us to specify the required characteristics of a type instead of an actual concrete type. In this post we are going to discuss how Scala supports structural sub-typing. Let’s create a simple Scala SBT project, ScalaTyping.

Create SBT Project - Structural Typing


Let’s introduce three types A, B and C. They are defined as singletons. They don’t share a common inheritance hierarchy, but they have one thing in common: all of them have a printThis method defined.

These types have structural similarity, which may or may not go along with semantic similarity. What if we just need to use the printThis method on objects of these types? Structural typing allows us to write code that depends on a certain code structure. In the following code, we are creating a List of objects which follow a certain structure: they must have a printThis method with the specified signature. Since A, B, and C fulfill this criterion, we can add their objects to the collection.
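A sketch of the idea using the post's A, B, C and printThis names (the method bodies are mine, and they return the formatted text rather than printing, for brevity):

```scala
import scala.language.reflectiveCalls

object A { def printThis(text: String): String = s"[A] $text" }
object B { def printThis(text: String): String = s"<B> $text" }
object C { def printThis(text: String): String = s"(C) $text" }

object Main extends App {
  // the element type is structural: anything with a matching printThis fits
  val printers: List[{ def printThis(text: String): String }] = List(A, B, C)
  printers.foreach(p => println(p.printThis("Hello")))
}
```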

Let’s compile and run the code now. We can notice that the code compiles successfully. The correct methods of A, B and C are being called and messages are being printed after being formatted as specified in the individual methods.

Structural Typing Simple - Output


This is only possible because Scala supports structural typing. In a nominal typing system, in order to use these types in this fashion, they would have to inherit from a common super type with the required signatures. That becomes difficult when the types come from a third-party library that supports no extension.

Structural Typing with Implicit Conversions

Now let’s take this discussion one step further by introducing one more important aspect. How can we still use this approach with a type which doesn’t have the required structural similarity? Here we have another type D, which is not structurally similar to the other types. This could be a type we get from some other library. It is possible for a type to be structurally dissimilar but semantically very similar: it has other methods which can be mapped to the required definition of our structural type.

The type has a method printOnce, which is very similar to our structural requirement for the collection. We just want this method to be used instead of printThis. But if we add D to the same collection, the Scala compiler doesn’t seem very happy. It is simply checking structural type similarity, which D clearly doesn’t have.

Type D - Compilation Error


We can get around this by using an implicit conversion, discussed in the previous post. Here we are introducing an implicit conversion from a type with printOnce to a type with printThis. We are also specifying what should be done when printThis is called: it just calls the printOnce method, passing through the arguments.

After introducing the above implicit conversion, just make sure that you have your import statements right. When we run the code now, it correctly uses the printOnce method following the implicit conversion. The compiler is also happy because of the introduction of this implicit conversion.
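A sketch of the implicit view, reusing A, D, printThis and printOnce from the post (the bodies return the formatted text rather than printing, for brevity):

```scala
import scala.language.{implicitConversions, reflectiveCalls}

object Printers {
  type Printable = { def printThis(text: String): String }

  object A { def printThis(text: String): String = s"[A] $text" }
  // D is semantically similar but structurally different: printOnce, not printThis
  object D { def printOnce(text: String): String = s"[D] $text" }

  // implicit view adapting anything with printOnce to the required structure
  implicit def toPrintable(d: { def printOnce(text: String): String }): Printable =
    new { def printThis(text: String): String = d.printOnce(text) }

  // D is accepted into the list through the implicit conversion
  val printers: List[Printable] = List(A, D)
}

object Main extends App {
  Printers.printers.foreach(p => println(p.printThis("Hello")))
}
```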



Views / Implicit Conversions in Scala

Scala supports implicit conversion through implicit methods. In this post we are going to discuss this feature briefly. Let us first create a Scala SBT project. We are naming the project ScalaImplicits.

Scala Implicit Conversion

Create Project

Let’s add a type Student. It is a case class with two properties: id (Int) and name (String).

Now let us provide an implementation of App. In the example below we are trying to instantiate a Student by assigning a string literal to it. Of course, we cannot do that; we need to instantiate it by providing the values of id and name. Is it possible to just assign a string like this?

Building your code would result in a compile time error. How can we still make it work?


Scala has an amazing implicit conversion feature. This is how it works: we provide a method definition using the implicit keyword. In this method, we specify the details of how such a conversion should take place. In the example below, we are converting a text to a Student object. The text is provided colon-separated in the format (id : name).

Now we just need to import the members of the enclosing object wherever we need this conversion. This helps the compiler figure out how to handle the conversion. Now the compiler should be happy!
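A sketch of the conversion; the StudentConversions object name and the parsing details are assumptions:

```scala
case class Student(id: Int, name: String)

object StudentConversions {
  import scala.language.implicitConversions

  // colon-separated "id : name" text converted into a Student
  implicit def stringToStudent(text: String): Student = {
    val Array(id, name) = text.split(":").map(_.trim)
    Student(id.toInt, name)
  }
}

object Main extends App {
  import StudentConversions._ // makes the conversion visible to the compiler
  val student: Student = "1 : John"
  println(student) // Student(1,John)
}
```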


Apache Thrift And its usage in C#


Services are based on the idea of RPC (Remote Procedure Calls). Apache Thrift is another framework for writing services, but it makes them easier to write by enabling polyglot, cross-language development.

Why do we need Apache Thrift

One of the benefits of using Thrift is data normalization. If the services are implemented in different languages, there is no way a client can send an object which the receiver can pick up and understand, unless it is serialized into a universal format understood by both.

The most popular universal formats for polyglot application development are JSON and Protobuf.

JSON is an excellent choice, but there are problems with using it, especially when we need compile time checking. Most of the time we end up generating types in C#, Java or the language of our choice to convert JSON into objects. This is extra work repeated for each project.

Protobuf allows us to write type definitions in a standard format in *.proto files. There are tools available to generate types for your language (Java / C# / Python) based on these definitions. But Protobuf is not a service framework; we need another framework, like WCF or a REST-based framework, to host services for us.

Apache Thrift, on the other hand, provides support of serialization / deserialization in the format of our choice. It also provides a framework for hosting our services.

How Does Apache Thrift Handle It?

“All problems in software can be solved by adding a layer of indirection.” Here the layer of indirection is an Interface Definition Language, commonly referred to as IDL. It is used to define the types which we need to send over the wire. These are just like WCF data contracts, but they are defined in a different syntax; additionally, they are used to generate code for different languages. We can then use the Thrift compiler to compile these types into the language of our choice. The compiler generates the source code for these types with complete serialization / deserialization logic. You can just add this code to your project.

Apache Thrift Interface Definition Language (IDL) compiler

Thrift IDL Compiler

Serialization Formats

Apache Thrift supports a number of serialization formats. They include:

  1. Binary: For better speed
  2. JSON: for readability of serialized objects
  3. Compact: For reduced size

Apache Thrift Services

Thrift also allows services to be defined in the same IDL format. They can then be implemented in the language of your choice. These services can then be hosted by one of the provided servers. The available servers are as follows:

  1. TSimpleServer
  2. TThreadedServer
  3. TThreadPoolServer

All of them inherit from TServer, available in the Thrift assembly.

In a service-oriented scenario, you would first define all the types used across boundaries in the IDL format. You can then use the Thrift compiler to generate code for the languages used by the clients and the service. You add the code generated for the client to the client project. We also need to add logic to send messages to the service.

Now you can define the service in the same IDL format. Again, we need to compile it into the language used for the server implementation. Then we can use a server provided by the Thrift library to host this service. Now the stage is set: when we call service methods, the client code serializes the objects using Thrift logic and sends them over the wire to the Thrift service. On the service side, Thrift deserializes the message and passes it to the server code.

Now let us compile the code into the language of our choice. The Thrift compiler can generate code in a number of languages including C# (for the .NET CLR), Java (for the JVM), Python, JavaScript and several others. Here we are generating code for C#. Thrift creates a separate folder for each language. You can notice a folder gen-csharp, which contains the generated client and server code for C# for the service IDL used.


Primitive Types
IDL has a number of available primitive types. They are as follows:

  1. bool: A boolean value (true or false)
  2. byte: An 8-bit signed integer
  3. i16: A 16-bit signed integer
  4. i32: A 32-bit signed integer
  5. i64: A 64-bit signed integer
  6. double: A 64-bit floating point number
  7. string: A text string encoded using UTF-8 encoding

There is no default DateTime type available in Thrift, but we can always keep the value of a DateTime in a standard numeric or string format and parse it on the other end of the service.

Required / Optional Fields
Abstract data types (classes) are defined in Thrift using the struct keyword. They can include primitive, enum and other struct types. All fields are required by default; the optional keyword is used to define non-required fields. All non-optional fields are added as constructor parameters of a type, which makes sense: if a field is required, it should not be possible to construct an instance without specifying the field’s value.

Apache Thrift Nuget Package

In order to use a Thrift client or server, we first need to download the package containing the Thrift runtime. For the .NET framework, it is available as a NuGet package.

Apache Thrift NuGet Package


Implementing Thrift Server in C#

The generated C# code has an interface Iface with the signatures we specified in the service definition IDL file. We just need to implement this interface to provide the service definition. But first we need to add the generated code to our C# server project.


Now we can just provide the implementation of the Iface interface to implement the described service.

We need to host the service. As we discussed above, Thrift provides servers to host the generated services. Here we are hosting the above implementation of the service on port 9090, using TThreadPoolServer.

Implementing Thrift Client in C#

A hosted Thrift service can be consumed by any client, irrespective of language, as long as a generator is available for it. Since we want to consume the service in C# too, we can just add the same generated code to the client project as well. I would recommend keeping it in the same folder and referencing the code as a link in both the client and server projects.

Thrift Client C#


Now we need to use the service to get the result of our requests. We can provide the following simple code to use the above service.

Now we just need to run the service. As we run the above client, the service provides the following responses, which we format and print to the console.


Download Code