Build Secure Integration Tests with Azure Key Vault in Azure DevOps

Scenario: We have integration tests written in .NET using NUnit. We don't want to store the API key and other sensitive information in the repository; instead we want the tests to retrieve all the keys from Azure Key Vault. At the same time, we also want test engineers to be able to run the tests in their local environment.

One way to achieve this is to use the test parameters feature from NUnit.

Add a .runsettings file to your project. This file is used for local development/testing only and should not be checked in with the values filled in. The format can be something like below; if you want to know more details, you can check it here.

<?xml version="1.0" encoding="utf-8" ?>
<RunSettings>
	<TestRunParameters>
		<Parameter name="ApiKey" value="" />
		<Parameter name="RefreshToken" value="" />
	</TestRunParameters>
</RunSettings>

Most importantly, you need to configure your IDE as follows:

  1. Make sure auto-detection of runsettings files is enabled in Visual Studio by checking this checkbox: Tools > Options > Test > Auto Detect runsettings Files.
  2. Make sure you have created your runsettings file in the root of your solution, not your project root.
  3. If all else fails and your tests still can’t find your .runsettings file, you can specify the file manually in the Test Explorer by selecting Options > Configure Run Settings > Select solution wide Run Settings file.

For Visual Studio for Mac, you need to do the following:

Add the runsettings file path to the project file, and it will do the work.

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <RunSettingsFilePath>$(MSBuildProjectDirectory)\.runsettings</RunSettingsFilePath>
  </PropertyGroup>
  …
</Project>

In your test class, you can retrieve the test parameters through TestContext.Parameters

[TestFixture]
public class MyTests
{
    private string _apiKey;
    private string _refreshToken;

    [SetUp]
    public void PopulateConfigs()
    {
        // Populated from .runsettings locally, or from TestRunParameters in the pipeline
        _apiKey = TestContext.Parameters["ApiKey"];
        _refreshToken = TestContext.Parameters["RefreshToken"];
    }
}
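If a test engineer runs the suite without filling in the local .runsettings values, the parameters simply come back empty. As a sketch (not from the original post), a small guard in the same SetUp can make that obvious, using NUnit's standard Assert.Inconclusive:

[SetUp]
public void PopulateConfigs()
{
    _apiKey = TestContext.Parameters["ApiKey"];
    _refreshToken = TestContext.Parameters["RefreshToken"];

    // Skip (rather than fail) when the secrets were not supplied
    if (string.IsNullOrEmpty(_apiKey) || string.IsNullOrEmpty(_refreshToken))
    {
        Assert.Inconclusive("ApiKey/RefreshToken are not set. Populate .runsettings " +
                            "locally or pass TestRunParameters in the pipeline.");
    }
}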

In the Azure Pipelines YAML file, this is how you retrieve the secrets from Key Vault and inject them as TestRunParameters arguments:

pool:
  vmImage: ubuntu-latest

trigger: none
pr: none
schedules:
- cron: "0 20 * * Sun,Mon,Tue,Wed,Thu"
  displayName: Daily morning build
  branches:
    include:
    - master
  always: true

variables:
  - name: dotnetVersion
    value: '7.0.x'

stages:
- stage:
  displayName: Run e2e .NET tests
  jobs:
  - job:
    displayName: build job
    steps:
    - task: UseDotNet@2
      displayName: Use dotnet $(dotnetVersion)
      inputs:
        packageType: sdk
        version: $(dotnetVersion)
    - task: DotNetCoreCLI@2
      displayName: dotnet restore
      inputs:
        command: 'restore'
    - task: DotNetCoreCLI@2
      displayName: 'dotnet build'
      inputs:
        command: 'build'
    - task: AzureKeyVault@2
      inputs:
        azureSubscription: 'My Service Principal'
        KeyVaultName: 'my-keyvault-dev'
        SecretsFilter: '*'
        RunAsPreJob: false
    - task: DotNetCoreCLI@2
      displayName: 'dotnet test'
      inputs:
        command: 'test'
        arguments: '-- "TestRunParameters.Parameter(name=\"ApiKey\", value=\"$(ApiKey)\")" "TestRunParameters.Parameter(name=\"RefreshToken\", value=\"$(RefreshToken)\")"'


$(ApiKey) and $(RefreshToken) map to the names of your Azure Key Vault secrets.

How Fear-Based Leaders Destroy Employee Morale and Performance

Fear is a powerful emotion that can motivate us to act or paralyze us from taking action. In the workplace, some leaders may use fear as a tool to influence their employees’ attitudes, values, or behaviors. However, this approach can have negative consequences for both the leaders and their teams. In this article, we will explore how fear-based leadership can destroy employee morale and performance, and what leaders can do instead to create a culture of psychological safety and empowerment.

I have learned of some instances where, upon receiving a resignation letter from an employee in my previous organization, the manager tried to dissuade them from leaving by saying “Don’t resign or else you will regret it” and citing examples of former employees who faced difficulties in their new jobs. I find this to be a very unprofessional and unethical tactic by the manager. A true leader would be supportive of their team member’s career aspirations and wish them well for their future endeavors. They would also recognize that the employee might have the potential to start their own successful business someday or be a successful leader.

What is fear-based leadership?

Fear-based leadership is a style of management that relies on threats, punishments, intimidation, or coercion to achieve desired outcomes. Fear-based leaders may use various tactics to instill fear in their employees, such as:

  • Setting unrealistic expectations and deadlines
  • Micromanaging and controlling every aspect of work
  • Criticizing and blaming employees for mistakes
  • Withholding praise and recognition
  • Creating a competitive and hostile work environment
  • Ignoring or dismissing employees’ opinions and feedback
  • Threatening employees with job loss, demotion, or pay cuts

Fear-based leaders may believe that fear is an effective motivator that can drive performance and productivity. They may also think that fear can help them maintain authority and control over their teams. However, research shows that fear-based leadership has many negative effects on both individuals and organizations.

The effects of fear-based leadership

Fear-based leadership can have detrimental impacts on employee morale and performance in various ways:

  • It demoralizes people: Fear-based leadership creates a power imbalance that erodes trust,
    respect, and dignity among employees. Employees may feel insecure, anxious, depressed,
    or hopeless about their work situation. They may also lose their sense of purpose and meaning in their work.
  • It creates a breeding ground for resentment: Some people may react with anger, frustration, or defiance to fear-based leadership. They may resent their leader for treating them unfairly or disrespectfully. They may also harbor negative feelings toward their colleagues who comply with or support the leader’s actions.
  • It impedes communication: Fear-based leadership discourages open and honest communication among employees.
    Employees may be afraid to speak up or share their ideas for fear of being ridiculed or punished by their leader. They may also avoid giving feedback or asking for help from their peers for fear of being seen as weak or incompetent. This leads to poor collaboration and information sharing within teams.
  • It inhibits innovation: Fear-based leadership stifles creativity and learning among employees. Employees may be reluctant to try new things or experiment with different solutions for fear of making mistakes or failing. They may also resist change or feedback for fear of losing their status quo or comfort zone. This hinders innovation and improvement within organizations.
  • It reduces engagement: Fear-based leadership lowers employee engagement levels. Employees may feel detached from their work goals and outcomes. They may also feel less motivated to perform well or go beyond expectations. They may only do the minimum required work to avoid negative consequences from their leader. This affects productivity and quality within organizations.

What leaders can do instead

Instead of using fear as a motivational tool for employees, leaders should create a culture of psychological safety
and empowerment within organizations. Psychological safety is “a shared belief held by members of a team that the team is safe for interpersonal risk taking”.

It means that employees feel comfortable expressing themselves without fearing negative repercussions from others.

Empowerment is “the process of enhancing feelings of self-efficacy among organizational members through identification with organizational goals”. It means that employees feel confident in their abilities and have autonomy over their work decisions.

Leaders who foster psychological safety and empowerment among employees can benefit from:

  • Higher trust: Employees trust leaders who treat them with respect, care, and fairness.
    They also trust colleagues who support them, listen to them, and collaborate with them. Trust enhances teamwork,
    cooperation, and loyalty within organizations.
  • Higher morale: Employees feel valued, appreciated, and recognized by their leaders who praise them, reward them, and acknowledge their contributions.

Power Apps – Mount a SQL Server table as an entity in Dataverse

Business case: We have an existing database from a legacy app, and we really enjoy how easy and fast it is to use Power Apps (model-driven app) to access entities in Dataverse. So can we mount an external table in Dataverse? The answer is yes; it is possible and straightforward.

Add a SQL Server connection in Power Apps – "Authentication Type: SQL Server Authentication" for this POC, although I think the best practice is to use a service principal (Azure AD application).

On the SQL Server side (in my case I am using Azure), you need to whitelist the Power Platform IP addresses – you can get the list from here: Managed connectors outbound IP addresses.

In Power Apps, go to Tables, select New Table, then select "New table from external data".

Select the SQL Server connection that we created before, then select the SQL Server table that you want to mount.

Once that is all done, you can see all the records from SQL Server as a virtual entity in your Dataverse.

Use PowerShell to execute SQL Server script files

Below is a snippet that uses PowerShell to execute a list of SQL scripts under a specific folder. I use this script in Octopus deployments to deploy database changes (* in this particular case we don't use Code First, therefore we don't use Migrate.exe).

# Executes a SQL script that may contain multiple batches separated by GO
function ExecuteSqlQuery ($ConnectionString, $SQLQuery) {
    # GO is a client-side batch separator, not T-SQL, so split the script on it
    $queries = [System.Text.RegularExpressions.Regex]::Split($SQLQuery, "^\s*GO\s*`$", [System.Text.RegularExpressions.RegexOptions]::IgnoreCase -bor [System.Text.RegularExpressions.RegexOptions]::Multiline)

    $queries | ForEach-Object {
        $q = $_

        if ((-not [String]::IsNullOrWhiteSpace($q)) -and ($q.Trim().ToLowerInvariant() -ne "go"))
        {
            $Connection = New-Object System.Data.SqlClient.SqlConnection

            Try
            {
                $Connection.ConnectionString = $ConnectionString
                $Connection.Open()

                $Command = New-Object System.Data.SqlClient.SqlCommand
                $Command.Connection = $Connection
                $Command.CommandText = $q
                $Command.ExecuteNonQuery() | Out-Null
            }
            Catch
            {
                Write-Host $_.Exception.GetType().FullName, $_.Exception.Message
            }
            Finally
            {
                if ($Connection.State -eq 'Open')
                {
                    Write-Host "Closing Connection..."
                    $Command.Dispose()
                    $Connection.Close()
                }
            }
        }
    }
}

# Connection string (SQL authentication, so no trusted_connection)
[string] $ConnectionString = "server=MyDBServer;database=MyDatabase;user id=Myuser;password=Mypassword;"

# The folder where all the SQL scripts are located
[string] $ScriptPath = "C:\Octopus\Applications\SQL2014UAT\Powershell Deployment\Scripts"

# Run every SQL file under the folder (the function above must be defined before this point)
foreach ($sqlFile in Get-ChildItem -Path $ScriptPath -Filter "*.sql" | Sort-Object Name)
{
    $SQLQuery = Get-Content $sqlFile.FullName -Raw

    ExecuteSqlQuery $ConnectionString $SQLQuery
}

Alternatively, if you have SMO, the PS extensions, and the snap-in, you can use the simpler script below. The prerequisites for Invoke-Sqlcmd are here.

Get-ChildItem -Path "C:\Octopus\Applications\SQL2014UAT\Powershell Deployment\Scripts" -Filter "*.sql" | % {invoke-sqlcmd -InputFile $_.FullName}

Reading multiple lines using PowerShell

By default, Get-Content in PowerShell reads the file as an array of lines, and when that array is converted back into a single string the line breaks are replaced by spaces. This means you might have an issue with multi-line SQL statements, as it will read this

IF EXISTS(SELECT 1 FROM sys.procedures WHERE NAME = 'PSTest')
BEGIN
DROP PROCEDURE PsTest
END
GO

becoming

IF EXISTS(SELECT 1 FROM sys.procedures WHERE NAME = 'PSTest') BEGIN DROP PROCEDURE PsTest END GO

So how do you read a file as a single multi-line string using PowerShell?

Before PowerShell 3.0, you can use the snippet below:

(Get-Content $FilePath) -join "`r`n" 

In PowerShell 3.0 and above, you can use the -Raw parameter:

Get-Content $FilePath -Raw

Logging in .NET – Elastic Search, Kibana and Serilog

I've used log4net in the past and found it quite useful, as it is ready to use out of the box. In my last workplace we used Splunk, and it's amazing: I was able to troubleshoot production issues by looking at trends and activities. You can run queries, filter the logs, and build a pretty dashboard. The downside is the cost: Splunk is expensive (I don't think it's aimed at the mainstream user or small business).

So I've found another logging mechanism/storage/tool which is amazing!! It is called Elasticsearch, and it's open source (well, there are different subscription levels for better support). Pretty much, Elasticsearch is the engine for search and analytics.

How about the GUI/dashboard? You can use Kibana. It is an open-source data visualization platform that allows you to interact with your data.

OK, so let's say I have a .NET app: how do I write my logs to Elasticsearch? You can use Serilog. It allows you to log structured event data, and you can use the Serilog Elasticsearch sink to integrate with Elasticsearch.
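As a rough sketch of what that wiring can look like (assuming the Serilog, Serilog.Sinks.Console, and Serilog.Sinks.Elasticsearch NuGet packages, and an Elasticsearch node at localhost:9200 – both assumptions for this example):

using System;
using Serilog;
using Serilog.Sinks.Elasticsearch;

class Program
{
    static void Main()
    {
        // Send structured events to both the console and Elasticsearch;
        // the node URI is an assumption for this sketch
        Log.Logger = new LoggerConfiguration()
            .WriteTo.Console()
            .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
            {
                AutoRegisterTemplate = true // let the sink register an index template
            })
            .CreateLogger();

        // {OrderId} and {Elapsed} become queryable fields in Kibana, not just text
        Log.Information("Processed order {OrderId} in {Elapsed} ms", 1234, 42);

        Log.CloseAndFlush();
    }
}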

Serilog has different sinks that allow you to store your logs externally (besides files), including Splunk.

I will talk more about Serilog separately in a different post, stay tuned!

Git – Prune your local branches to keep it in sync with remote branches

Your local repository often accumulates stale remote-tracking branches that no longer have a corresponding branch on the remote, and you want to bring them back in sync with the remote branches.

1. Let's start by listing the remote branches first, just to see which branches are available remotely

$ git remote show origin

2. Let's see our stale local branches. The "--dry-run" option will just display the stale branches without deleting them

$ git remote prune origin --dry-run

3. Alternatively, if you really want to delete the stale branches, run it without the "--dry-run" option

$ git remote prune origin

* Just make sure you have already pushed your feature branch to the remote before doing this

Dynamic Deserialization using JsonConverter

This post is a continuation of the previous post, which was about casting an object dynamically. This post explains how to deserialize a JSON object dynamically.

I have JSON that I want to deserialize dynamically, based on a specific property that defines which object it is. We can do it easily and elegantly by using a custom JsonConverter.

1. Create a custom JsonConverter

using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using Newtonsoft.Json.Serialization;

public class MessageConverter : JsonConverter
{
    static readonly JsonSerializerSettings SpecifiedSubclassConversion =
        new JsonSerializerSettings() { ContractResolver = new CamelCasePropertyNamesContractResolver() };

    public override bool CanConvert(Type objectType)
    {
        return (objectType == typeof(Metadata));
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        var jObject = JObject.Load(reader);

        // The messageType property decides which concrete Message<T> to deserialize into
        switch (jObject["messageType"].Value<string>())
        {
            case "STAFF_CREATED":
                return JsonConvert.DeserializeObject<Message<StaffDetail>>(jObject.ToString(), SpecifiedSubclassConversion);

            case "CITY_CREATED":
                return JsonConvert.DeserializeObject<Message<City>>(jObject.ToString(), SpecifiedSubclassConversion);

            default:
                throw new Exception(string.Format("messageType {0} cannot be handled", jObject["messageType"].Value<string>()));
        }
    }

    // This converter only reads; writing falls back to the default serializer behavior
    public override bool CanWrite
    {
        get { return false; }
    }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        throw new NotImplementedException();
    }
}

2. Deserialize it using your converter

var message = JsonConvert.DeserializeObject<Metadata>(
    jsonValue,
    new JsonSerializerSettings { Converters = new JsonConverter[] { new MessageConverter() } });
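For illustration, here is a hypothetical round trip: the jsonValue below follows the transformed message shape (with a payload property) used in the related AutoMapper post, and Message<StaffDetail>/StaffDetail are the POCOs defined there.

var jsonValue = @"{
    ""messageType"": ""STAFF_CREATED"",
    ""payload"": { ""name"": ""fransiscus"", ""dateOfBirth"": ""01/01/1950"", ""cityId"": 1 }
}";

var message = JsonConvert.DeserializeObject<Metadata>(
    jsonValue,
    new JsonSerializerSettings { Converters = new JsonConverter[] { new MessageConverter() } });

// The converter returned the concrete type, so we can safely downcast
var staffMessage = message as Message<StaffDetail>;
Console.WriteLine(staffMessage.Payload.Name); // fransiscus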

Automapper – Dynamic and Generic Mapping

In AutoMapper we normally define 1-to-1 mappings, but I have a case where the incoming stream is a JSON payload which I cast to a dynamic (using JObject.Parse), and one of the properties within the payload defines which object it needs to be cast into. Let's take a look at the sample below.

Input
JSON payload to create a city

{
    "requestId": "C4910016-C30D-415C-89D3-D08D724429A6",
    "messageType": "CITY_CREATED",
    "categoryName": "categoryA",
    "metadata": {
        "city": "sydney",
        "state": "NSW",
        "postcode": "2000",
        "country": "australia"
    }
}

At the same time, we can also have a JSON payload to create a staff member

{
  "requestId":"C4910016-C30D-415C-89D3-D08D724429A6",
  "messageType": "STAFF_CREATED",
  "categoryName": "categoryB",
  "staffDetail": {
    "name": "fransiscus",
    "dateOfBirth": "01/01/1950"
  },
  "location" : {
    "cityId" : "1"
  }
}

So what we are doing here: every message goes into the payload property (it can contain any object), and we add some extra information/header/metadata at the parent level.
Desired Outputs

{
    "messageType": "CITY_CREATED",
    "payload": {
        "city": "sydney",
        "state": "NSW",
        "postcode": "2000",
        "country": "australia"
    },
    "provider": "abc",
    "providerRequestId": "C4910016-C30D-415C-89D3-D08D724429A6",
    "receivedAt": "2015-09-30T23:53:58.6118521Z",
    "lastUpdated": "2015-09-30T23:53:58.6128283Z",
    "lastUpdater": "Transformer",
    "attempt": 0
}
{
    "messageType": "STAFF_CREATED",
    "payload": {
        "staffName": "fransiscus",
        "dateOfBirth": "01/01/1950",
        "cityId": "1"
    },
    "provider": "abc",
    "providerRequestId": "C4910016-C30D-415C-89D3-D08D724429A6",
    "receivedAt": "2015-09-30T23:53:58.6118521Z",
    "lastUpdated": "2015-09-30T23:53:58.6128283Z",
    "lastUpdater": "Transformer",
    "attempt": 0
}

Mapping this to a concrete class 1:1 is straightforward and easy. The problem here is that "messageType" is what decides which object it should be.

Automapper Configuration:

1. POCO objects

An abstract class that stores all the metadata:

public abstract class Metadata
{
    public string MessageType { get; set; }

    public string Provider { get; set; }

    public string ProviderRequestId { get; set; }

    public DateTime ReceivedAt { get; set; }

    public DateTime LastUpdated { get; set; }

    public string LastUpdater { get; set; }

    public int Attempt { get; set; }

    public List<string> Errors { get; set; }
}

public class City
{
    public string CityName { get; set; }
    public string State { get; set; }
    public string PostCode { get; set; }
    public string Country { get; set; }
}

public class StaffDetail
{
    public string Name { get; set; }
    public string DateOfBirth { get; set; }
    public int CityId { get; set; }
}

public class Message<T> : Metadata where T : class
{
    public T Payload { get; set; }
}

2. Let's create a TypeConverter for the base class, Metadata; this converter returns the derived class

public class MetadataTypeConverter : TypeConverter<dynamic, Metadata>
{
    protected override Metadata ConvertCore(dynamic source)
    {
        Metadata metadata;

        var type = (string)source.messageType.Value;

        // messageType decides which derived Message<T> to build
        switch (type)
        {
            case "STAFF_CREATED":
                metadata = new Message<StaffDetail> { Payload = Mapper.Map<dynamic, StaffDetail>(source) };
                break;
            case "CITY_CREATED":
                metadata = new Message<City> { Payload = Mapper.Map<dynamic, City>(source) };
                break;

            default:
                throw new Exception(string.Format("no mapping defined for {0}", source.messageType.Value));
        }

        metadata.ProviderRequestId = source.requestId;
        metadata.Provider = "My Provider";
        metadata.MessageType = source.messageType;
        metadata.ReceivedAt = DateTime.UtcNow;
        metadata.LastUpdated = DateTime.UtcNow;
        metadata.LastUpdater = "Transformer";
        metadata.Attempt = 0;

        return metadata;
    }
}

3. Let's create TypeConverters for the derived classes, StaffDetail and City

public class CityTypeConverter : TypeConverter<dynamic, City>
{
    protected override City ConvertCore(dynamic source)
    {
        City city = new City();
        city.CityName = source.metadata.city;
        city.State = source.metadata.state;
        city.PostCode = source.metadata.postcode;
        city.Country = source.metadata.country;

        return city;
    }
}

public class StaffDetailTypeConverter : TypeConverter<dynamic, StaffDetail>
{
    protected override StaffDetail ConvertCore(dynamic source)
    {
        StaffDetail staffdetail = new StaffDetail();
        staffdetail.Name = source.staffDetail.name;
        staffdetail.DateOfBirth = source.staffDetail.dateOfBirth;
        staffdetail.CityId = source.location.cityId;

        return staffdetail;
    }
}

4. Define the AutoMapper mapping in the configuration

public class WhafflMessageMapping : Profile
{
    public override string ProfileName
    {
        get { return this.GetType().Name; }
    }

    protected override void Configure()
    {
        this.CreateMap<dynamic, Metadata>()
            .ConvertUsing(new MetadataTypeConverter());

        this.CreateMap<dynamic, StaffDetail>()
            .ConvertUsing(new StaffDetailTypeConverter());

        this.CreateMap<dynamic, City>()
            .ConvertUsing(new CityTypeConverter());
    }

    private Metadata BuildWhafflMessage(dynamic context)
    {
        var type = (string)context.messageType.Value;

        switch (type)
        {
            case "STAFF_CREATED":
                return new Message<StaffDetail> { Payload = Mapper.Map<dynamic, StaffDetail>(context) };
            case "CITY_CREATED":
                return new Message<City> { Payload = Mapper.Map<dynamic, City>(context) };

            default:
                throw new Exception(string.Format("no mapping defined for {0}", context.messageType.Value));
        }
    }
}
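To tie it together, here is a minimal usage sketch. The Mapper.Initialize call and the json variable are assumptions for illustration; the mapping itself just exercises the profile above (older static-API AutoMapper, matching the TypeConverter style used in this post).

// Register the profile once at startup (assumption: AutoMapper 4.x-style static API)
Mapper.Initialize(cfg => cfg.AddProfile<WhafflMessageMapping>());

// Parse the incoming payload into a dynamic object, then let
// MetadataTypeConverter pick the right Message<T> based on messageType
dynamic payload = JObject.Parse(json);
Metadata message = Mapper.Map<dynamic, Metadata>(payload);

var cityMessage = message as Message<City>;
if (cityMessage != null)
{
    Console.WriteLine(cityMessage.Payload.CityName); // sydney
}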

401 Unauthorized – WebRequest

I got this nasty 401 Unauthorized error from my code all of a sudden, and I didn't really know why or what was causing it. I used Fiddler as a proxy to see the request headers, and all of a sudden it worked; but removing the proxy brought back the 401!!!

After googling for a while, I found something interesting: the client was negotiating the authentication level, so even if you pass the basic authorization header it may simply be ignored. I played around with the code below and it fixed my issue.

WebRequest request = WebRequest.Create(source);
// MutualAuthRequested asks for mutual authentication but still proceeds if the server cannot provide it
request.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;
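For context, here is roughly how that line fits into a complete request; the URL and credentials are placeholders, not from the original post.

using System;
using System.IO;
using System.Net;

class Program
{
    static void Main()
    {
        var source = "https://example.com/api/resource"; // placeholder URL

        WebRequest request = WebRequest.Create(source);
        request.Credentials = new NetworkCredential("user", "password"); // placeholder credentials
        // Request mutual authentication; the request still proceeds if the server cannot provide it
        request.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;

        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}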