Mastering Feature Flag Management with Azure Feature Manager

In the dynamic realm of software development, the power to adapt and refine your application’s features in real-time is a game-changer. Azure Feature Manager emerges as a potent tool in this scenario, empowering developers to effortlessly toggle features on or off directly from the cloud. This comprehensive guide delves into how Azure Feature Manager can revolutionize your feature flag control, enabling seamless feature introduction, rollback capabilities, A/B testing, and tailored user experiences.

Introduction to Azure Feature Manager

Azure Feature Manager is a sophisticated component of Azure App Configuration. It offers a unified platform for managing feature flags across various environments and applications. Its capabilities extend to gradual feature rollouts, audience targeting, and seamless integration with Azure Active Directory for enhanced access control.

Step-by-Step Guide to Azure App Configuration Setup

Initiating your journey with Azure Feature Manager begins with setting up an Azure App Configuration store. Follow these steps for a smooth setup:

  1. Create Your Azure App Configuration: Navigate to the Azure portal and initiate a new Azure App Configuration resource. Fill in the required details and proceed with creation.
  2. Secure Your Access Keys: Post-creation, access the “Access keys” section under your resource settings to retrieve the connection strings, crucial for your application’s connection to the Azure App Configuration.

Crafting Feature Flags

To leverage feature flags in your application:

  1. Within the Azure App Configuration resource, click on “Feature Manager” and then “+ Add” to introduce a new feature flag.
  2. Identify Your Feature Flag: Name it thoughtfully, as this identifier will be used within your application to check the flag’s status.

Application Integration Essentials

Installing Required NuGet Packages

Your application necessitates specific packages for Azure integration:

  • Microsoft.Extensions.Configuration.AzureAppConfiguration
  • Microsoft.FeatureManagement.AspNetCore

These can be added via your IDE or through the command line in your project directory:

dotnet add package Microsoft.Extensions.Configuration.AzureAppConfiguration
dotnet add package Microsoft.FeatureManagement.AspNetCore

Application Configuration

Modify your appsettings.json to include your Azure App Configuration connection string:

{
  "ConnectionStrings": {
    "AppConfig": "Endpoint=https://<your-resource-name>.azconfig.io;Id=<id>;Secret=<secret>"
  }
}

Further, in Program.cs (or Startup.cs for earlier .NET versions), ensure your application is configured to utilize Azure App Configuration and activate feature management:

var builder = WebApplication.CreateBuilder(args);

builder.Configuration.AddAzureAppConfiguration(options =>
{
    options.Connect(builder.Configuration["ConnectionStrings:AppConfig"])
           .UseFeatureFlags();
});

builder.Services.AddFeatureManagement();
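
If you also want flag changes made in Azure to be picked up without restarting the app, you can enable dynamic refresh. Below is a minimal sketch, assuming the Microsoft.Azure.AppConfiguration.AspNetCore package is installed; the 30-second interval is an arbitrary choice, and the option name differs between package versions (CacheExpirationInterval before 8.0, SetRefreshInterval afterwards):

builder.Configuration.AddAzureAppConfiguration(options =>
{
    options.Connect(builder.Configuration["ConnectionStrings:AppConfig"])
           .UseFeatureFlags(flagOptions =>
           {
               // Cached flag values are re-checked against the store at most this often
               flagOptions.CacheExpirationInterval = TimeSpan.FromSeconds(30);
           });
});

builder.Services.AddAzureAppConfiguration();
builder.Services.AddFeatureManagement();

var app = builder.Build();

// Middleware that triggers the configuration refresh as requests arrive
app.UseAzureAppConfiguration();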

Implementing Feature Flags

To verify a feature flag’s status within your code:

using Microsoft.FeatureManagement;

public class FeatureService
{
    private readonly IFeatureManager _featureManager;

    public FeatureService(IFeatureManager featureManager)
    {
        _featureManager = featureManager;
    }

    public async Task<bool> IsFeatureActive(string featureName)
    {
        return await _featureManager.IsEnabledAsync(featureName);
    }
}
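
Alternatively, in ASP.NET Core you can gate an entire controller or action with the FeatureGate attribute that ships in Microsoft.FeatureManagement.AspNetCore. The flag name below is just a placeholder:

using Microsoft.AspNetCore.Mvc;
using Microsoft.FeatureManagement.Mvc;

public class BetaController : Controller
{
    // Executes only when the "BetaFeature" flag is enabled;
    // when the flag is off, the framework returns a 404 by default.
    [FeatureGate("BetaFeature")]
    public IActionResult Index()
    {
        return View();
    }
}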

Advanced Implementation: Custom Targeting Filter

In the Azure portal, open your feature flag and edit it.

Make sure “Default Percentage” is set to 0; in this scenario we want to target specific users based on their email address.

For user- or group-specific targeting, we need to implement ITargetingContextAccessor. In the example below we target users by email address, where the email address comes from a JWT claim:

using Microsoft.AspNetCore.Http;
using Microsoft.FeatureManagement.FeatureFilters;
using System.Security.Claims;

namespace SampleApp
{
    public class B2CTargetingContextAccessor : ITargetingContextAccessor
    {
        private const string TargetingContextLookup = "B2CTargetingContextAccessor.TargetingContext";
        private readonly IHttpContextAccessor _httpContextAccessor;

        public B2CTargetingContextAccessor(IHttpContextAccessor httpContextAccessor)
        {
            _httpContextAccessor = httpContextAccessor;
        }

        public ValueTask<TargetingContext> GetContextAsync()
        {
            HttpContext httpContext = _httpContextAccessor.HttpContext;

            //
            // Try cache lookup
            if (httpContext.Items.TryGetValue(TargetingContextLookup, out object value))
            {
                return new ValueTask<TargetingContext>((TargetingContext)value);
            }

            ClaimsPrincipal user = httpContext.User;

            //
            // Build targeting context based off user info
            TargetingContext targetingContext = new TargetingContext
            {
                UserId = user.FindFirst("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress")?.Value,
                Groups = new string[] { }
            };

            //
            // Cache for subsequent lookup
            httpContext.Items[TargetingContextLookup] = targetingContext;

            return new ValueTask<TargetingContext>(targetingContext);
        }
    }
}

In Program.cs (or Startup.cs for earlier .NET versions), modify your feature management registration to use the targeting filter:

    builder.Services.AddFeatureManagement().WithTargeting<B2CTargetingContextAccessor>();

With the accessor registered, the targeting filter resolves the current user’s context automatically, so the flag check itself stays simple:

using Microsoft.FeatureManagement;

public class FeatureService
{
    private readonly IFeatureManager _featureManager;

    public FeatureService(IFeatureManager featureManager)
    {
        _featureManager = featureManager;
    }

    public async Task<bool> IsFeatureActive()
    {
        // "UseLocationWebhook" is the flag configured with the targeting filter;
        // the registered ITargetingContextAccessor supplies the current user's context.
        return await _featureManager.IsEnabledAsync("UseLocationWebhook");
    }
}
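
For reference, behind the portal UI the flag is stored as JSON with a Microsoft.Targeting client filter. It looks roughly like the sketch below (the flag name and email address are placeholders):

{
  "id": "UseLocationWebhook",
  "enabled": true,
  "conditions": {
    "client_filters": [
      {
        "name": "Microsoft.Targeting",
        "parameters": {
          "Audience": {
            "Users": [ "someone@example.com" ],
            "Groups": [],
            "DefaultRolloutPercentage": 0
          }
        }
      }
    ]
  }
}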

Simplifying API Testing in Postman: Auto-refresh OAuth Tokens with Pre-request Scripts

Introduction:

Welcome to a quick guide on enhancing your API testing workflow in Postman! If you frequently work with APIs that require OAuth tokens, you know the hassle of manually refreshing tokens. This blog post will show you how to automate this process using Pre-request scripts in Postman.

What You Need:

  • Postman installed on your system.
  • API credentials (Client ID, Client Secret) for the OAuth token endpoint.

Step 1: Setting Up Your Environment

  • Open Postman and select your workspace.
  • Go to the ‘Environments’ tab and create a new environment (e.g., “MyAPIEnvironment”).
  • Add variables like accessToken, clientId, clientSecret, and tokenUrl.

Step 2: Creating the Pre-request Script

  • Go to the ‘Pre-request Script’ tab in your request or collection.
  • Add the following JavaScript code:
if (!pm.environment.get('accessToken') || pm.environment.get('isTokenExpired')) {
    // Build a client_credentials token request from the environment variables
    const getTokenRequest = {
        url: pm.environment.get('tokenUrl'),
        method: 'POST',
        header: 'Content-Type:application/x-www-form-urlencoded',
        body: {
            mode: 'urlencoded',
            urlencoded: [
                { key: 'client_id', value: pm.environment.get('clientId') },
                { key: 'client_secret', value: pm.environment.get('clientSecret') },
                { key: 'grant_type', value: 'client_credentials' }
            ]
        }
    };

    // Fetch a fresh token and store it for use as the {{accessToken}} variable
    pm.sendRequest(getTokenRequest, (err, res) => {
        if (err) {
            console.log(err);
        } else {
            const jsonResponse = res.json();
            pm.environment.set('accessToken', jsonResponse.access_token);
            pm.environment.set('isTokenExpired', false);
            // Note: nothing here ever sets isTokenExpired back to true; flip it
            // yourself (e.g. in a Tests script on a 401, or by tracking expires_in).
        }
    });
}

Step 3: Using the Access Token in Your Requests

  • In the ‘Authorization’ tab of your API request, select ‘Bearer Token’ as the type.
  • For the token, use the {{accessToken}} variable.

Step 4: Testing and Verification

  • Send your API request.
  • The Pre-request script should automatically refresh the token if it’s not set or expired.
  • Check the Postman Console to debug or verify the token refresh process.

Conclusion: Automating token refresh in Postman saves time and reduces the error-prone process of manual token updates. With this simple Pre-request script, your OAuth token management becomes seamless, letting you focus more on testing and less on token management.

Semantically Generating NuGet Package Versions: Best Practices Using Branch Conventions in Azure DevOps Pipelines

Learn how to streamline NuGet package versioning in Azure DevOps pipelines by generating semantic versions based on branch conventions. Proper versioning is essential for effective package management, and semantic versioning ensures compatibility and clear communication of changes.

A common use case: you want to share a library of common objects (for example, message schemas) across different microservices/APIs, but also be able to make a minor change and try it in one microservice before it is merged to master. In other words, you want to produce a NuGet package version that exists purely for development or testing prior to the merge. All of this can be managed through a versioning convention.

A few things to look at below: the variables (Major, Minor, Patch, versionPatch, versionNumber), and the task that appends “alpha” (change it to “beta” if you prefer) to the version variable when the branch is not master. You also need to set versioningScheme on the pack task to use the versionNumber variable defined above.

For the stable version, you can now see the package in the NuGet package manager with “Prerelease” unticked.

For the version that comes off a branch, you need to tick “Include prerelease”.

Sample pipeline YAML below:

trigger:
  batch: true
  branches:
    include:
    - '*'

pool:
  vmImage: ubuntu-latest

variables:  
  projectName: 'Contoso.Messaging.csproj'
  projectPath: '**/Contoso.Messaging.csproj'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'
  Major: '1'
  Minor: '0'
  Patch: '0'
  versionPatch: $[counter(variables['Patch'], 0)]
  versionNumber: $(Major).$(Minor).$(versionPatch)

steps:

# Add this Command to Include the .NET 6 SDK
- task: UseDotNet@2
  displayName: Use .NET 6.0
  inputs:
    packageType: 'sdk'
    version: '6.0.x'

- task: DotNetCoreCLI@2
  displayName: 'Restore'
  inputs:
    command: 'restore'
    projects: '$(projectPath)'

- task: DotNetCoreCLI@2
  displayName: 'Build'
  inputs:
    command: 'build'
    arguments: '--configuration $(buildConfiguration) -p:Version=$(versionNumber)'
    projects: '$(projectPath)'
    
- script: echo '##vso[task.setvariable variable=versionNumber]$(versionNumber)-alpha'
  displayName: "Set Nuget package version number"
  condition: ne(variables['Build.SourceBranchName'], 'master')

- task: DotNetCoreCLI@2
  displayName: 'Pack'
  inputs:
    command: 'pack'
    packagesToPack: '**/*.csproj'
    versioningScheme: 'byEnvVar'
    versionEnvVar: 'versionNumber'
    outputDir: '$(Build.ArtifactStagingDirectory)'

- task: NuGetAuthenticate@0
  displayName: 'NuGet Authenticate'

- task: NuGetCommand@2
  displayName: 'NuGet push'
  inputs:
    command: push
    nuGetFeedType: 'internal'
    publishVstsFeed: 'xxxxxxxxxxxxxxxxxx'
    allowPackageConflicts: true

Read and remove scheduled messages in Azure Service Bus

Ever wondered how you can remove scheduled messages from a Service Bus topic or queue? We had a bug where one of our services kept scheduling messages that were never meant to be queued. We deployed a fix, but to verify it we had to be sure no messages were still scheduled, so we first needed to remove all scheduled messages.

You can also use the same code to check whether your messages are being scheduled correctly according to your logic.

I was expecting Service Bus Explorer in the Azure portal to let us peek at these scheduled messages, but unfortunately it doesn’t have that feature.

For a Service Bus topic you can use the code below:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

class Program
{
    // Connection string for the namespace can be obtained from the Azure portal under the
    // 'Shared Access policies' section.
    const string ServiceBusConnectionString = "[Servicebus connection string with entity path]";
    static ITopicClient topicClient;
    static IMessageReceiver messageReceiver;

    static void Main(string[] args)
    {
        MainAsync().GetAwaiter().GetResult();
    }

    static async Task MainAsync()
    {
        var sbConnStringBuilder = new ServiceBusConnectionStringBuilder(ServiceBusConnectionString);
        topicClient = new TopicClient(sbConnStringBuilder);
        Console.WriteLine("======================================================");
        Console.WriteLine("Press any key to exit..");
        Console.WriteLine("======================================================");

        messageReceiver = new MessageReceiver(sbConnStringBuilder);

        // Peek walks the entity without locking or removing messages
        Message message = await messageReceiver.PeekAsync();

        // A message whose ScheduledEnqueueTimeUtc is in the future is a scheduled message
        while (message != null)
        {
            if (message.ScheduledEnqueueTimeUtc > DateTime.UtcNow)
            {
                // Remove the scheduled message
                await topicClient.CancelScheduledMessageAsync(message.SystemProperties.SequenceNumber);
            }
            message = await messageReceiver.PeekAsync();
        }

        Console.ReadKey();
        await topicClient.CloseAsync();
    }
}

For a Service Bus queue you can use the code below:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

class Program
{
    // Connection string for the namespace can be obtained from the Azure portal under the
    // 'Shared Access policies' section.
    const string ServiceBusConnectionString = "[Servicebus connection string with entity path]";
    static IQueueClient queueClient;
    static IMessageReceiver messageReceiver;

    static void Main(string[] args)
    {
        MainAsync().GetAwaiter().GetResult();
    }

    static async Task MainAsync()
    {
        var sbConnStringBuilder = new ServiceBusConnectionStringBuilder(ServiceBusConnectionString);
        queueClient = new QueueClient(sbConnStringBuilder);
        Console.WriteLine("======================================================");
        Console.WriteLine("Press any key to exit..");
        Console.WriteLine("======================================================");

        messageReceiver = new MessageReceiver(sbConnStringBuilder);

        Message message = await messageReceiver.PeekAsync();

        // A message whose ScheduledEnqueueTimeUtc is in the future is a scheduled message
        while (message != null)
        {
            if (message.ScheduledEnqueueTimeUtc > DateTime.UtcNow)
            {
                // Remove the scheduled message
                await queueClient.CancelScheduledMessageAsync(message.SystemProperties.SequenceNumber);
            }
            message = await messageReceiver.PeekAsync();
        }

        Console.ReadKey();
        await queueClient.CloseAsync();
    }
}
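
Note that the snippets above use the older Microsoft.Azure.ServiceBus package, which has since been deprecated. On the current Azure.Messaging.ServiceBus SDK the same idea looks roughly like the sketch below; the connection string and queue name are placeholders, and for a topic you would peek via a subscription receiver while cancelling through a sender on the topic:

using System;
using System.Collections.Generic;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("[connection string]");
ServiceBusReceiver receiver = client.CreateReceiver("[queue name]");
ServiceBusSender sender = client.CreateSender("[queue name]");

// Peek a batch of messages without locking or removing them
IReadOnlyList<ServiceBusReceivedMessage> messages = await receiver.PeekMessagesAsync(maxMessages: 100);

foreach (ServiceBusReceivedMessage message in messages)
{
    // A scheduled message has an enqueue time in the future
    if (message.ScheduledEnqueueTime > DateTimeOffset.UtcNow)
    {
        await sender.CancelScheduledMessageAsync(message.SequenceNumber);
    }
}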

Build Secure Integration Tests with Azure Key Vault in Azure DevOps

Scenario: we have integration tests written in .NET using NUnit. We don’t want to store the API key and other sensitive information in the repository; instead we want to retrieve all secrets from Azure Key Vault. At the same time, we would like test engineers to be able to run the tests in their local environment.

One way to achieve this is the test parameters feature of NUnit.

Add a .runsettings file to your project. This file is for local development/testing only and should not be checked in with real values; the format looks like the example below. For more detail, see the NUnit documentation on TestRunParameters.

<?xml version="1.0" encoding="utf-8" ?>
<RunSettings>
	<TestRunParameters>
		<Parameter name="ApiKey" value="" />
		<Parameter name="RefreshToken" value="" />
	</TestRunParameters>
</RunSettings>

Most importantly, you need to configure your IDE as follows:

  1. Make sure autodetection of runsettings is enabled in Visual Studio by checking this checkbox: Tools > Options > Test > Auto Detect runsettings Files.
  2. Make sure you have created your runsettings file in the root of your solution, not your project root.
  3. If all else fails and your tests still can’t find your .runsettings file, you can specify the file manually in the Test Explorer by selecting Options > Configure Run Settings > Select solution wide Run Settings file.

For Visual Studio for Mac, you need to do the following:

Add the runsettings file path to the project file, and it will do the work.

<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<RunSettingsFilePath>$(MSBuildProjectDirectory)\.runsettings</RunSettingsFilePath>
</PropertyGroup>
…
</Project>

In your test class, you can retrieve the test parameters through TestContext.Parameters

using NUnit.Framework;

[TestFixture]
public class MyTests
{
    private string _apiKey;
    private string _refreshToken;

    [SetUp]
    public void PopulateConfigs()
    {
        // TestContext.Parameters exposes the TestRunParameters from .runsettings
        // (locally) or from the command-line arguments (in the pipeline)
        _apiKey = TestContext.Parameters["ApiKey"];
        _refreshToken = TestContext.Parameters["RefreshToken"];
    }
}
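
If you prefer the command line over the IDE, dotnet test can also be pointed at the file explicitly (the path is relative to where you run it):

dotnet test --settings .runsettings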

In the Azure Pipelines YAML file, this is how you retrieve the secrets from Key Vault and inject them as TestRunParameters arguments:

pool:
  vmImage: ubuntu-latest

trigger: none
pr: none
schedules:
- cron: "0 20 * * Sun,Mon,Tue,Wed,Thu"
  displayName: Daily morning build
  branches:
    include:
    - master
  always: true

variables:
  - name: dotnetVersion
    value: '7.0.x'

stages:
- stage:
  displayName: Run e2e .NET tests
  jobs:
  - job:
    displayName: build job
    steps:
    - task: UseDotNet@2
      displayName: Use dotnet $(dotnetVersion)
      inputs:
        packageType: sdk
        version: $(dotnetVersion)
    - task: DotNetCoreCLI@2
      displayName: dotnet restore
      inputs:
        command: 'restore'
    - task: DotNetCoreCLI@2
      displayName: 'dotnet build'
      inputs:
        command: 'build'
    - task: AzureKeyVault@2
      inputs:
        azureSubscription: 'My Service Principal'
        KeyVaultName: 'my-keyvault-dev'
        SecretsFilter: '*'
        RunAsPreJob: false
    - task: DotNetCoreCLI@2
      displayName: 'dotnet test'
      inputs:
        command: 'test'
        arguments: '-- "TestRunParameters.Parameter(name=\"ApiKey\", value=\"$(ApiKey)\")" "TestRunParameters.Parameter(name=\"RefreshToken\", value=\"$(RefreshToken)\")"'


$(ApiKey) and $(RefreshToken) are mapped to your Azure Key Vault secret names.

How Fear Based Leaders Destroy Employee Morale and Performance

Fear is a powerful emotion that can motivate us to act or paralyze us from taking action. In the workplace, some leaders may use fear as a tool to influence their employees’ attitudes, values, or behaviors. However, this approach can have negative consequences for both the leaders and their teams. In this article, we will explore how fear-based leadership can destroy employee morale and performance, and what leaders can do instead to create a culture of psychological safety and empowerment.

I have learned of some instances where, upon receiving a resignation letter from an employee in my previous organization, the manager tried to dissuade them from leaving by saying “Don’t resign or else you will regret it” and citing examples of former employees who faced difficulties in their new jobs. I find this to be a very unprofessional and unethical tactic by the manager. A true leader would be supportive of their team member’s career aspirations and wish them well for their future endeavors. They would also recognize that the employee might have the potential to start their own successful business someday or be a successful leader.

What is fear-based leadership?

Fear-based leadership is a style of management that relies on threats, punishments, intimidation, or coercion to achieve desired outcomes. Fear-based leaders may use various tactics to instill fear in their employees, such as:

  • Setting unrealistic expectations and deadlines
  • Micromanaging and controlling every aspect of work
  • Criticizing and blaming employees for mistakes
  • Withholding praise and recognition
  • Creating a competitive and hostile work environment
  • Ignoring or dismissing employees’ opinions and feedback
  • Threatening employees with job loss, demotion, or pay cuts

Fear-based leaders may believe that fear is an effective motivator that can drive performance and productivity. They may also think that fear can help them maintain authority and control over their teams. However, research shows that fear-based leadership has many negative effects on both individuals and organizations.

The effects of fear-based leadership

Fear-based leadership can have detrimental impacts on employee morale and performance in various ways:

  • It demoralizes people: Fear-based leadership creates a power imbalance that erodes trust, respect, and dignity among employees. Employees may feel insecure, anxious, depressed, or hopeless about their work situation. They may also lose their sense of purpose and meaning in their work.
  • It creates a breeding ground for resentment: Some people may react with anger, frustration, or defiance to fear-based leadership. They may resent their leader for treating them unfairly or disrespectfully. They may also harbor negative feelings toward their colleagues who comply with or support the leader’s actions.
  • It impedes communication: Fear-based leadership discourages open and honest communication among employees. Employees may be afraid to speak up or share their ideas for fear of being ridiculed or punished by their leader. They may also avoid giving feedback or asking for help from their peers for fear of being seen as weak or incompetent. This leads to poor collaboration and information sharing within teams.
  • It inhibits innovation: Fear-based leadership stifles creativity and learning among employees. Employees may be reluctant to try new things or experiment with different solutions for fear of making mistakes or failing. They may also resist change or feedback for fear of losing their status quo or comfort zone. This hinders innovation and improvement within organizations.
  • It reduces engagement: Fear-based leadership lowers employee engagement levels. Employees may feel detached from their work goals and outcomes. They may also feel less motivated to perform well or go beyond expectations. They may only do the minimum required work to avoid negative consequences from their leader. This affects productivity and quality within organizations.

What leaders can do instead

Instead of using fear as a motivational tool, leaders should create a culture of psychological safety and empowerment within organizations. Psychological safety is “a shared belief held by members of a team that the team is safe for interpersonal risk taking”. It means that employees feel comfortable expressing themselves without fearing negative repercussions from others.

Empowerment is “the process of enhancing feelings of self-efficacy among organizational members through identification with organizational goals”. It means that employees feel confident in their abilities and have autonomy over their work decisions.

Leaders who foster psychological safety and empowerment among employees can benefit from:

  • Higher trust: Employees trust leaders who treat them with respect, care, and fairness. They also trust colleagues who support them, listen to them, and collaborate with them. Trust enhances teamwork, cooperation, and loyalty within organizations.
  • Higher morale: Employees feel valued, appreciated, and recognized by leaders who praise them and reward them, which lifts morale and performance.

Power Apps – Mount a SQL Server table as an entity in Dataverse

Business case: we have an existing database from a legacy app, and we really enjoy how easy and fast it is to use Power Apps (model-driven app) to access entities in Dataverse. So can we mount an external table into Dataverse? The answer is yes; it is possible, and it is straightforward.

Add a SQL Server connection in Power Apps. I used “Authentication Type: SQL Server Authentication” for this POC, but the best practice is to use a service principal (Azure AD application).

On the SQL Server side (in my case Azure SQL), you need to whitelist the Power Platform IP addresses. You can get the list from “Managed connectors outbound IP addresses”.

In Power Apps, go to Tables, select New Table, then select “New table from external data”.

Select the SQL Server connection we created before, then select the SQL Server table that you want to mount.

Once done, you can see all the records from SQL Server as a virtual entity in your Dataverse.

Use PowerShell to execute SQL Server script files

Below is a snippet that uses PowerShell to execute the SQL script files under a specific folder. I use this script in Octopus Deploy to roll out database changes (in this particular case we don’t use Code First, so we don’t use Migrate.exe).

# Executes a SQL script that may contain multiple batches separated by GO
# (defined first, because PowerShell requires functions to exist before they are called)
function ExecuteSqlQuery ($ConnectionString, $SQLQuery) {
    # Use GO to separate between commands
    $queries = [System.Text.RegularExpressions.Regex]::Split($SQLQuery, "^\s*GO\s*`$", [System.Text.RegularExpressions.RegexOptions]::IgnoreCase -bor [System.Text.RegularExpressions.RegexOptions]::Multiline)

    $queries | ForEach-Object {
        $q = $_

        if ((-not [String]::IsNullOrWhiteSpace($q)) -and ($q.Trim().ToLowerInvariant() -ne "go"))
        {
            $Connection = New-Object System.Data.SQLClient.SQLConnection

            Try
            {
                $Connection.ConnectionString = $ConnectionString
                $Connection.Open()

                $Command = New-Object System.Data.SQLClient.SQLCommand
                $Command.Connection = $Connection
                $Command.CommandText = $q
                $Command.ExecuteNonQuery() | Out-Null
            }
            Catch
            {
                Write-Host $_.Exception.GetType().FullName, $_.Exception.Message
            }
            Finally
            {
                if ($Connection.State -eq 'Open')
                {
                    Write-Host "Closing Connection..."
                    $Command.Dispose()
                    $Connection.Close()
                }
            }
        }
    }
}

# Connection string (SQL authentication; for Windows authentication drop the
# user id/password and use trusted_connection=true instead)
[string] $ConnectionString = "server=MyDBServer;database=MyDatabase;user id=Myuser;password=Mypassword;"

# The folder where all the SQL scripts are located
[string] $ScriptPath = "C:\Octopus\Applications\SQL2014UAT\Powershell Deployment\Scripts"

# Execute every SQL file under the folder, in name order
foreach ($sqlFile in Get-ChildItem -Path $ScriptPath -Filter "*.sql" | Sort-Object)
{
    $SQLQuery = Get-Content "$ScriptPath\$sqlFile" -Raw

    ExecuteSqlQuery $ConnectionString $SQLQuery
}

Alternatively, if you have SMO, the PowerShell extensions, and the snap-in installed, you can use the simpler script below (see the Invoke-Sqlcmd documentation for its prerequisites):

Get-ChildItem -Path "C:\Octopus\Applications\SQL2014UAT\Powershell Deployment\Scripts" -Filter "*.sql" | % {invoke-sqlcmd -InputFile $_.FullName}

Reading multiple lines using PowerShell

By default, Get-Content in PowerShell reads a file line by line into an array; when that array is flattened back into a single string, the line breaks are lost. That matters if you have multi-line SQL statements, because it will read this

IF EXISTS(SELECT 1 FROM sys.procedures WHERE NAME = 'PSTest')
BEGIN
DROP PROCEDURE PsTest
END
GO

becoming

IF EXISTS(SELECT 1 FROM sys.procedures WHERE NAME = 'PSTest') BEGIN DROP PROCEDURE PsTest END GO

So how do you read multiple lines using PowerShell?

Before PowerShell 3.0 you can use the code snippet below:

(Get-Content $FilePath) -join "`r`n" 

In PowerShell 3.0 and above, you can use the -Raw parameter:

Get-Content $FilePath -Raw

Logging in .NET – Elasticsearch, Kibana and Serilog

I’ve used log4net in the past and found it quite useful, as it is ready to use out of the box. In my last workplace we used Splunk, and it is amazing: I was able to troubleshoot production issues by looking at trends and activities, run queries and filters over the logs, and build pretty dashboards. The downside is cost; Splunk is expensive (I don’t think it is aimed at mainstream users or small businesses).

So I’ve found another logging engine/storage/tool which is amazing: Elasticsearch. It is open source (with different subscription levels for better support), and in essence it is an engine for search and analytics.

How about the GUI/dashboard? You can use Kibana, an open-source data visualization platform that allows you to interact with your data.

OK, so if I have a .NET application, how do I write my logs to Elasticsearch? You can use Serilog. It allows you to log structured event data, and the Serilog Elasticsearch sink integrates it with Elasticsearch.

Serilog has many sink providers that allow you to store your logs externally (besides files), including Splunk.
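
As a quick preview, a minimal Serilog configuration writing structured logs to Elasticsearch might look like the sketch below. It assumes the Serilog and Serilog.Sinks.Elasticsearch packages, and the endpoint and index format are placeholder choices:

using System;
using Serilog;
using Serilog.Sinks.Elasticsearch;

Log.Logger = new LoggerConfiguration()
    .Enrich.FromLogContext()
    .WriteTo.Console()
    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
    {
        AutoRegisterTemplate = true,            // create the index template on startup
        IndexFormat = "myapp-logs-{0:yyyy.MM}"  // one index per month
    })
    .CreateLogger();

// Structured properties (OrderId, Elapsed) become searchable fields in Kibana
Log.Information("Order {OrderId} processed in {Elapsed} ms", 1234, 56.7);

Log.CloseAndFlush();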

I will talk more about Serilog in a separate post, stay tuned!