Hosting a WCF Service as a Windows Service Using Topshelf

You might be wondering why I am blogging about WCF. Is it still relevant? I am responsible for adding some functionality to a legacy WCF project, and I actually like working on it. WCF went out of fashion a long time ago, but many large enterprise applications still use it.

As you know, WCF services need to run in a host process, so when clients want to consume them, we need to make sure the services are alive. The host process provides a host that is responsible for setting up the services, listening for incoming messages, creating instances of the service class, and responding to clients by dispatching calls to it.

As I mentioned, in this legacy application we wanted to host our services as a Windows service. The app had been using a console application, but the problem with a console application is that you need to make sure it is open all the time. For example, if the server gets restarted, you have to open the app manually. You could register the app as a startup process so it opens whenever the system boots, but we can achieve a better result by writing a Windows service instead.

Windows services are a great way to run code in the background: once the service is installed, it keeps running without a console window. We can also control how the service starts; for example, it can start automatically when the system boots, or be configured to start when a user logs in. Both the console application and the Windows service count as self-hosting options, because in both cases the services run inside our own .NET process.
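
To make the original approach concrete, here is a minimal sketch of console-based self-hosting (the UsersService type is the one used later in this post; the endpoint configuration is assumed to live in App.config):

using System;
using System.ServiceModel;

class ConsoleHost
{
    static void Main()
    {
        // The ServiceHost sets up the service and listens for incoming messages
        using (var host = new ServiceHost(typeof(UsersService.UsersService)))
        {
            host.Open();
            Console.WriteLine("Service running. Press Enter to quit.");
            Console.ReadLine(); // keep the process, and with it the host, alive
        } // Dispose closes the host
    }
}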

Installing Topshelf

Topshelf is an open-source .NET Windows service library. It makes the process of creating Windows services much easier, so we can focus on the service functionality instead of the boilerplate service code. To install Topshelf, all we need to do is install its NuGet package:

Install-Package Topshelf

The next step is wrapping the service functionality inside a class with two methods, Start and Stop. Topshelf will call these methods to start and stop the service:

public class MyService
{
    private ServiceHost usersHost;

    public bool Start()
    {
        try
        {
            usersHost = new ServiceHost(typeof(UsersService.UsersService));
            usersHost.Open();

            Console.WriteLine("Service Running...");

            return true;
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
            return false;
        }
        // No finally block here: the original version closed usersHost in a finally
        // block, which shut the host down immediately after opening it. The host
        // must stay open until Stop is called.
    }

    public bool Stop()
    {
        usersHost.Close();

        return true;
    }
}

The next step is registering this class with Topshelf to create our Windows service:

public class Program
{
    static void Main(string[] args)
    {
        HostFactory.Run(serviceConfig =>
        {
            serviceConfig.Service<MyService>(serviceInstance =>
            {
                serviceInstance.ConstructUsing(() => new MyService());
                serviceInstance.WhenStarted(execute => execute.Start());
                serviceInstance.WhenStopped(execute => execute.Stop());
            });

            serviceConfig.SetServiceName("MyService"); // service names should not contain spaces
            serviceConfig.SetDisplayName("My Service");
            serviceConfig.SetDescription("Hosting WCF services");

            serviceConfig.StartAutomatically();
        });
    }
}
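
A nice side effect of using Topshelf: if you run the compiled executable directly with no arguments, it behaves like an ordinary console application, which makes debugging from Visual Studio straightforward. It only behaves like a Windows service once installed.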

Installing our service into Windows

  • Run Command Prompt as admin
  • cd into the bin\Debug folder
  • Run {AssemblyName}.exe install
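
Topshelf understands a few other commands out of the box as well, for example:

{AssemblyName}.exe start
{AssemblyName}.exe stop
{AssemblyName}.exe uninstall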

Adding NLog

We can add logging to the mix using NLog. To do so, we first need to add the following package:

Install-Package Topshelf.NLog

The next step is to add the following configuration to the app.config file:

<configSections>
   <section name="nlog" type="NLog.Config.ConfigSectionHandler, NLog"/>
</configSections>

<nlog>
    <targets>
      <target name="consoleTarget" type="Console" />
    </targets>
    <rules>
      <logger name="*" minlevel="Debug" writeTo="consoleTarget" />
    </rules>
</nlog>
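
The console target is enough while developing, but once installed as a Windows service there is no console, so a file target is usually added too. A minimal sketch (the fileName path is just an example):

<target name="fileTarget" type="File" fileName="logs/service.log" />

plus a rule that writes to it:

<logger name="*" minlevel="Info" writeTo="fileTarget" />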

Then we need to register NLog with Topshelf in the Program.cs file:

public class Program
{
    static void Main(string[] args)
    {
        HostFactory.Run(serviceConfig =>
        {
            serviceConfig.UseNLog();

            // as before
        });
    }
}

Now we can use the logger in our service:

public class MyService
{
    // our service declarations
    private static readonly LogWriter _log = HostLogger.Get<MyService>();

    public bool Start()
    {
        try
        {
            _log.Info("Starting services");
            usersHost = new ServiceHost(typeof(UsersService.UsersService));
            usersHost.Open();
            return true;
        }
        catch (Exception ex)
        {
            _log.Error("Failed to start services", ex);
            return false;
        }
    }

    // Stop stays the same as before
}
What's Elasticsearch?

Wikipedia:

Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. Elasticsearch is developed in Java and is released as open source under the terms of the Apache License. Official clients are available in Java, .NET (C#), PHP, Python, Apache Groovy, Ruby and many other languages. According to the DB-Engines ranking, Elasticsearch is the most popular enterprise search engine followed by Apache Solr, also based on Lucene.

Executing SELECT * FROM … queries all the time consumes a lot of CPU, and those scans don't use an index. One solution is the Full Text Search (FTS) feature of an RDBMS, but if you need a high-performance search engine, you'd better use Elasticsearch.

Elastic Stack

History

  • 1999 - Lucene
    • It helped the search engines of the day index the data they were ingesting from the internet and provided reasonable ways of retrieving that information based on fuzzy matching.
  • 2004 - Compass
    • Built on top of Lucene, offering the same services but in a more scalable manner; the idea was to provide a distributed search solution.
  • 2010 - Elasticsearch
    • Distributed, RESTful search and analytical engine

Use cases

  • Security/log analytics
  • Marketing = use the data to answer questions like:
    • How do people find our website?
    • Where did they come from?
    • What device are they using?
    • What part of the world are they coming from?
  • Search = Elasticsearch was built with the idea of being a great search engine

Concepts

  • Near Real Time (NRT)
  • Cluster:
    • collection of our nodes
    • has a unique name
  • Node:
    • a single server that is part of the cluster and stores the data
    • has a unique name
  • Index:
    • a collection of similar documents
  • Type:
    • a category or a partition of your index
  • Document:
    • the basic unit of information that can be indexed, in JSON format
    • e.g. a customer or an event (see the example below)
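
For instance, a customer document might look like this (the fields are made up for illustration):

{
  "id": 1,
  "firstName": "John",
  "lastName": "Doe",
  "state": "CA"
}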

Querying

  • Simple query:
    • get all accounts: GET bank/account/_search
    • get all accounts in state of CA:
      GET bank/account/_search
      {
        "query": {
            "match": {
                "state": "CA"
            }
        }
      }
      

    • multiple conditions: a bool query combines several clauses; with must, every clause has to match (lastname is another field from the same sample data):

      GET bank/account/_search
      {
        "query": {
          "bool": {
            "must": [
              { "match": { "state": "CA" } },
              { "match": { "lastname": "Smith" } }
            ]
          }
        }
      }
    • boost: 3 = makes a clause count three times as much as the others when scoring (see the example below)
    • _score: how Elasticsearch rates the relevance of each document against your search query
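
As a sketch of boosting (the lastname field comes from the same bank sample data; the weight is arbitrary), here is a should query where a match on lastname counts three times as much as a match on state:

GET bank/account/_search
{
  "query": {
    "bool": {
      "should": [
        { "match": { "state": "CA" } },
        { "match": { "lastname": { "query": "Smith", "boost": 3 } } }
      ]
    }
  }
}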

Bulk loading data into Elasticsearch

_bulk: the endpoint for the bulk API; this is where we send requests when we want to bulk load data. It expects newline-delimited JSON (including a newline at the very end, which is important, otherwise we'll get errors). It lets us index, create, delete, and update documents. When using it with curl, we need to make sure we pass the payload with the --data-binary flag.

  • /_bulk
  • newline-delimited JSON
  • Index, Create, Delete, Update
  • --data-binary

How to bulk load data

  • Create a file (here named reqs, to match the curl command below) that has some data in it:
{ "index" : { "_index" : "indexName", "_type" : "typeName", "_id": "1" }}
{"title":"Web Developer II","author":"Chrysler Clerk","content":"Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Proin risus. Praesent lectus.","publishedDate":"2018-02-03T17:51:14Z"}
  • Load the data into Elasticsearch using curl:
curl -s -H "Content-Type: application/x-ndjson" -XPOST localhost:9200/_bulk --data-binary "@reqs"; echo
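
To double-check that the documents made it in, a quick count against the index (indexName being the placeholder used above):

curl -s "localhost:9200/indexName/_count"; echo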

GraphQL

Recently I have been working on a Node stack project as a full-stack JavaScript developer. It's a great experience because I'm working with talented developers. We use TypeScript on both the backend and the frontend, which is great; coming from a .NET background, I couldn't be happier, since we have types for our JS code :) Sometimes TypeScript drives me crazy, though: some existing React libraries have no type declarations, and a lot of TypeScript's benefits disappear, so we have to write our own .d.ts files. But it is worth it :) On the backend we use GraphQL for our APIs, so in this post I'll share my observations about this technology.

What’s GraphQL?


GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. The main benefit of using it is that you ask for exactly what you want and get back exactly that, nothing else: clients describe what data they want and what shape it should have. Another good point is that requests are validated against a so-called schema. We create this schema on our server, and it describes the functionality available to our clients. Inside the schema, we define our type definitions:

type User {
    id: ID!
    firstName: String
    lastName: String
    age: Int
}

type Query {
    users: [User]
}

In the schema, we need a top-level type called Query, through which the server defines the queries it can accept. In this case we're saying the users query returns a list of users: the result is an array of type User.

Resolvers

Now we need a resolver to determine what we get back when we call the users query. Resolvers are functions that respond to queries and mutations; they are what actually produce the results for a query.

const root = {
  users: () => {
    return [
      { id: 1, firstName: "Sirwan", lastName: "Afifi", age: 29 },
      { id: 2, firstName: "User 2", lastName: "lastName2", age: 20 },
      { id: 3, firstName: "User 3", lastName: "lastName3", age: 20 },
      { id: 4, firstName: "User 4", lastName: "lastName4", age: 20 },
      { id: 5, firstName: "User 5", lastName: "lastName5", age: 20 },
      { id: 6, firstName: "User 6", lastName: "lastName6", age: 20 },
      { id: 7, firstName: "User 7", lastName: "lastName7", age: 20 },
      { id: 8, firstName: "User 8", lastName: "lastName8", age: 20 }
    ];
  }
};
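
For completeness, here is a minimal sketch of how the schema and the root resolver could be wired together; the post doesn't show the server setup, so the choice of express-graphql here is an assumption:

// Assumed stack: express + express-graphql + graphql (graphql-js)
const express = require("express");
const graphqlHTTP = require("express-graphql");
const { buildSchema } = require("graphql");

const schema = buildSchema(`
  type User {
    id: ID!
    firstName: String
    lastName: String
    age: Int
  }

  type Query {
    users: [User]
  }
`);

// root is the resolver object defined above

const app = express();
app.use("/graphql", graphqlHTTP({ schema, rootValue: root, graphiql: true }));
app.listen(4000);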

Now we can query the users to get the result. The query gets parsed and executed against a data source on the server, and the server sends back the result as JSON:

(Screenshot: querying the API in GraphiQL)
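
For example, a query that asks only for each user's id and firstName gets back a result that mirrors exactly that shape (response truncated to the first two users):

query {
  users {
    id
    firstName
  }
}

{
  "data": {
    "users": [
      { "id": "1", "firstName": "Sirwan" },
      { "id": "2", "firstName": "User 2" }
    ]
  }
}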

As you can see, we get IntelliSense for our API. In fact, GraphQL is a bit like TypeScript for our API: by using it we get great static type analysis. And where traditional REST often needs many endpoints and requests, with GraphQL we have one single endpoint.

Mutation types

The query type defines what is returned when we call a query. With the mutation type, we can mutate (change or create) data.

input UserInput {
    id: ID!
    firstName: String
    lastName: String
    age: Int
}

type Mutation {
    createUser(input: UserInput): User
}

The great point about a mutation is that we mutate something and can also ask for something in the result; that's why we specify a return type for a mutation, in this case User:

(Screenshot: running the createUser mutation in GraphiQL)
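
For instance, a createUser call might look like this, assuming a matching resolver exists on the server (the input values are made up):

mutation {
  createUser(input: { id: 9, firstName: "New", lastName: "User", age: 30 }) {
    id
    firstName
  }
}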

GraphQL is a convenient way for a client to communicate with the server. There is much more to say about this technology, so I will pick it up again in my next article.