Technical Insights: Azure, .NET, Dynamics 365 & EV Charging Architecture

Author: fransiscuss

Microsoft Azure Service Bus Exception: “Cannot allocate more handles. The maximum number of handles is 4999”

When working with Microsoft Azure Service Bus, you may encounter the following exception:

“Cannot allocate more handles. The maximum number of handles is 4999.”

This issue typically arises due to improper dependency injection scope configuration for the ServiceBusClient. In most cases, the ServiceBusClient is registered as Scoped instead of Singleton, leading to the creation of multiple instances during the application lifetime, which exhausts the available handles.

In this blog post, we’ll explore the root cause and demonstrate how to fix this issue by using proper dependency injection in .NET applications.

Understanding the Problem

Scoped vs. Singleton

  1. Scoped: A new instance of the service is created per request.
  2. Singleton: A single instance of the service is shared across the entire application lifetime.

The ServiceBusClient is a heavyweight object: it maintains AMQP connections to the Service Bus namespace and manages its own resources. It should therefore be registered as a Singleton so that a single, long-lived instance serves the whole application, avoiding excessive resource allocation and ensuring optimal performance.

Before Fix: Using Scoped Registration

Here’s an example of the problematic configuration:

public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped(serviceProvider =>
    {
        string connectionString = Configuration.GetConnectionString("ServiceBus");
        return new ServiceBusClient(connectionString);
    });

    services.AddScoped<IMessageProcessor, MessageProcessor>();
}

In this configuration:

  • A new instance of ServiceBusClient is created for each HTTP request or scoped context.
  • This quickly leads to resource exhaustion, causing the “Cannot allocate more handles” error.

Solution: Switching to Singleton

To fix this, register the ServiceBusClient as a Singleton, ensuring a single instance is shared across the application lifetime:

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton(serviceProvider =>
    {
        string connectionString = Configuration.GetConnectionString("ServiceBus");
        return new ServiceBusClient(connectionString);
    });

    services.AddScoped<IMessageProcessor, MessageProcessor>();
}

In this configuration:

  • A single instance of ServiceBusClient is created and reused for all requests.
  • Resource usage is optimized, and the exception is avoided.

Sample Code: Before and After

Before Fix (Scoped Registration)

public interface IMessageProcessor
{
    Task ProcessMessageAsync();
}

public class MessageProcessor : IMessageProcessor
{
    private readonly ServiceBusClient _client;

    public MessageProcessor(ServiceBusClient client)
    {
        _client = client;
    }

    public async Task ProcessMessageAsync()
    {
        await using ServiceBusReceiver receiver = _client.CreateReceiver("queue-name");
        ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
        if (message != null) // ReceiveMessageAsync returns null if no message arrives in time
        {
            Console.WriteLine($"Received message: {message.Body}");
            await receiver.CompleteMessageAsync(message);
        }
    }
}

After Fix (Singleton Registration)

public void ConfigureServices(IServiceCollection services)
{
    // Singleton registration for ServiceBusClient
    services.AddSingleton(serviceProvider =>
    {
        string connectionString = Configuration.GetConnectionString("ServiceBus");
        return new ServiceBusClient(connectionString);
    });

    services.AddScoped<IMessageProcessor, MessageProcessor>();
}

public class MessageProcessor : IMessageProcessor
{
    private readonly ServiceBusClient _client;

    public MessageProcessor(ServiceBusClient client)
    {
        _client = client;
    }

    public async Task ProcessMessageAsync()
    {
        await using ServiceBusReceiver receiver = _client.CreateReceiver("queue-name");
        ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
        if (message != null) // ReceiveMessageAsync returns null if no message arrives in time
        {
            Console.WriteLine($"Received message: {message.Body}");
            await receiver.CompleteMessageAsync(message);
        }
    }
}

Key Takeaways

  1. Always use Singleton scope for ServiceBusClient to optimize resource usage.
  2. Avoid using Scoped or Transient scope for long-lived, resource-heavy objects.
  3. Test your application under load to ensure no resource leakage occurs.
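As a side note, the singleton registration can also be delegated to the Microsoft.Extensions.Azure helper package. This is a sketch assuming that package and the same "ServiceBus" connection-string name used in the samples above; AddAzureClients registers the client with a singleton lifetime by default, which avoids the handle-exhaustion problem by construction:

```csharp
using Microsoft.Extensions.Azure;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    // AddAzureClients wires up ServiceBusClient as a singleton for you.
    services.AddAzureClients(clients =>
    {
        clients.AddServiceBusClient(Configuration.GetConnectionString("ServiceBus"));
    });

    services.AddScoped<IMessageProcessor, MessageProcessor>();
}
```

This keeps the lifetime decision in the library rather than in hand-written factory lambdas.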

Resolving the “Certificate Chain Was Issued by an Authority That Is Not Trusted” Error During Sitecore Installation on Windows 11

When installing Sitecore on Windows 11, you might encounter the following error:

A connection was successfully established with the server, but then an error occurred during the login process. (provider: SSL Provider, error: 0 - The certificate chain was issued by an authority that is not trusted.)

This issue arises due to a recent security enforcement rolled out by Microsoft: on Windows 11, SQL Server connections are now encrypted by default. Some of the PowerShell scripts used during the Sitecore installation process are not configured to handle this change, resulting in the above error.

In this blog post, we’ll dive into the root cause of the issue and walk you through the steps to resolve it.


Understanding the Root Cause

The error is triggered because the PowerShell scripts used in the Sitecore installation attempt to connect to the SQL Server without explicitly trusting the server’s SSL certificate. With the new security enforcement, connections to the SQL Server default to encryption, but without a trusted certificate, the connection fails.

This is particularly relevant when using self-signed certificates or development environments where the SQL Server’s certificate authority is not inherently trusted.
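To confirm that the certificate (and not something else) is the culprit, you can test the connection outside Sitecore: in a standard SqlClient connection string the same behavior is controlled by the Encrypt and TrustServerCertificate keywords. A hedged example (server and database names are placeholders):

```
Server=localhost;Database=master;Integrated Security=True;Encrypt=True;TrustServerCertificate=True
```

With TrustServerCertificate=True the connection succeeds even when the server presents a self-signed certificate; with it set to False you should reproduce the error above.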

How to Fix the Error

The solution is to explicitly configure the Sitecore installation scripts to trust the SQL Server’s certificate by setting the TrustServerCertificate variable to true. This needs to be done in two specific JSON files used during the installation process:

  1. sitecore-xp0.json
  2. xconnect-xp0.json

Steps to Resolve

  1. Locate the JSON Files:
    • Navigate to the folder where you extracted the Sitecore installation files.
    • Open the ConfigurationFiles directory (or equivalent, depending on your setup).
    • Find the sitecore-xp0.json and xconnect-xp0.json files.
  2. Modify the JSON Files:
    • Open sitecore-xp0.json in a text editor (e.g., Visual Studio Code or Notepad++).
    • Look for [variable('Sql.Credential')] in the JSON structure.
    • Add the following key-value pair: "TrustServerCertificate": true
    • For example:
"CreateShardApplicationDatabaseServerLoginInvokeSqlCmd": {
    "Description": "Create Collection Shard Database Server Login.",
    "Type": "InvokeSqlcmd",
    "Params": {
        "ServerInstance": "[parameter('SqlServer')]",
        "Credential": "[variable('Sql.Credential')]",
        "TrustServerCertificate": true,
        "InputFile": "[variable('Sharding.SqlCmd.Path.CreateShardApplicationDatabaseServerLogin')]",
        "Variable": [
            "[concat('UserName=',variable('SqlCollection.User'))]",
            "[concat('Password=',variable('SqlCollection.Password'))]"
        ]
    },
    "Skip": "[or(parameter('SkipDatabaseInstallation'),parameter('Update'))]"
},
"CreateShardManagerApplicationDatabaseUserInvokeSqlCmd": {
    "Description": "Create Collection Shard Manager Database User.",
    "Type": "InvokeSqlcmd",
    "Params": {
        "ServerInstance": "[parameter('SqlServer')]",
        "Credential": "[variable('Sql.Credential')]",
        "TrustServerCertificate": true,
        "Database": "[variable('Sql.Database.ShardMapManager')]",
        "InputFile": "[variable('Sharding.SqlCmd.Path.CreateShardManagerApplicationDatabaseUser')]",
        "Variable": [
            "[concat('UserName=',variable('SqlCollection.User'))]",
            "[concat('Password=',variable('SqlCollection.Password'))]"
        ]
    },
    "Skip": "[or(parameter('SkipDatabaseInstallation'),parameter('Update'))]"
}
    • Repeat the same modification for the xconnect-xp0.json file.
  3. Save and Retry Installation:
    • Save both JSON files after making the changes.
    • Re-run the Sitecore installation PowerShell script.

    Additional Notes

    • Security Considerations: Setting TrustServerCertificate to true is a quick fix for development environments. However, for production environments, it’s recommended to install a certificate from a trusted Certificate Authority (CA) on the SQL Server to ensure secure and trusted communication.
    • Error Still Persists?: Double-check the JSON modifications and ensure the SQL Server is accessible from your machine. If issues persist, verify firewall settings and SQL Server configuration.

    Conclusion

    The “Certificate chain was issued by an authority that is not trusted” error during Sitecore installation is a direct result of Microsoft’s enhanced security measures in Windows 11. By updating the Sitecore configuration files to include the TrustServerCertificate setting, you can bypass this error and complete the installation successfully.

    For a smoother experience in production environments, consider implementing a properly signed SSL certificate for your SQL Server.

    If you’ve encountered similar issues or have additional tips, feel free to share them in the comments below!

    Sending Apple Push Notification for Live Activities Using .NET

    In the evolving world of app development, ensuring real-time engagement with users is crucial. Apple Push Notification Service (APNs) enables developers to send notifications to iOS devices, and with the introduction of Live Activities in iOS, keeping users updated about ongoing tasks is easier than ever. This guide demonstrates how to use .NET to send Live Activity push notifications using APNs.

    Prerequisites

    Before diving into the code, ensure you have the following:

    1. Apple Developer Account with access to APNs.
    2. P8 Certificate downloaded from the Apple Developer Portal.
    3. Your Team ID, Key ID, and Bundle ID of the iOS application.
    4. .NET SDK installed on your system.

    Overview of the Code

    The provided ApnsService class encapsulates the logic to interact with APNs for sending push notifications, including Live Activities. Let’s break it down step-by-step:

    1. Initializing APNs Service

    The constructor sets up the base URI for APNs:

    • Use https://api.push.apple.com for production.
    • Use https://api.development.push.apple.com for the development environment.

    _httpClient = new HttpClient { BaseAddress = new Uri("https://api.development.push.apple.com:443") };

    2. Generating the JWT Token

    APNs requires a JWT token for authentication. This token is generated using:

    • Team ID: Unique identifier for your Apple Developer account.
    • Key ID: Associated with the P8 certificate.
    • ES256 Algorithm: Uses the private key in the P8 certificate to sign the token.

    private string GetProviderToken()
    {
        long epochNow = (long)DateTime.UtcNow.Subtract(new DateTime(1970, 1, 1)).TotalSeconds;
        Dictionary<string, object> payload = new Dictionary<string, object>
        {
            { "iss", _teamId },
            { "iat", epochNow }
        };
        var extraHeaders = new Dictionary<string, object>
        {
            { "kid", _keyId },
            { "alg", "ES256" }
        };
    
        CngKey privateKey = GetPrivateKey();
    
        return JWT.Encode(payload, privateKey, JwsAlgorithm.ES256, extraHeaders);
    }

    3. Loading the Private Key

    The private key is extracted from the .p8 file using BouncyCastle.

    private CngKey GetPrivateKey()
    {
        using (var reader = File.OpenText(_p8CertificateFileLocation))
        {
            ECPrivateKeyParameters ecPrivateKeyParameters = (ECPrivateKeyParameters)new PemReader(reader).ReadObject();
            var x = ecPrivateKeyParameters.Parameters.G.AffineXCoord.GetEncoded();
            var y = ecPrivateKeyParameters.Parameters.G.AffineYCoord.GetEncoded();
            var d = ecPrivateKeyParameters.D.ToByteArrayUnsigned();
    
            return EccKey.New(x, y, d);
        }
    }

    4. Sending the Notification

    The SendApnsNotificationAsync method handles:

    • Building the request with headers and payload.
    • Adding apns-push-type as liveactivity for Live Activity notifications.
    • Adding a unique topic for Live Activities by appending .push-type.liveactivity to the Bundle ID.

    public async Task SendApnsNotificationAsync<T>(string deviceToken, string pushType, T payload) where T : class
    {
            var jwtToken = GetProviderToken();
            var jsonPayload = JsonSerializer.Serialize(payload);
            // Prepare HTTP request
            var request = new HttpRequestMessage(HttpMethod.Post, $"/3/device/{deviceToken}")
            {
                Content = new StringContent(jsonPayload, Encoding.UTF8, "application/json")
            };
            request.Headers.Add("authorization", $"Bearer {jwtToken}");
            request.Headers.Add("apns-push-type", pushType);
            if (pushType == "liveactivity")
            {
                request.Headers.Add("apns-topic", _bundleId + ".push-type.liveactivity");
                request.Headers.Add("apns-priority", "10");
            }
            else
            {
                request.Headers.Add("apns-topic", _bundleId);
            }
            request.Version = new Version(2, 0);
            // Send the request
            var response = await _httpClient.SendAsync(request);
            if (response.IsSuccessStatusCode)
            {
                Console.WriteLine("Push notification sent successfully!");
            }
            else
            {
                var responseBody = await response.Content.ReadAsStringAsync();
                Console.WriteLine($"Failed to send push notification: {response.StatusCode} - {responseBody}");
            }
        }

    Sample Usage

    Here’s how you can use the ApnsService class to send a Live Activity notification:

    var apnsService = new ApnsService();
     // Example device token (replace with a real one)
     var pushDeviceToken = "808f63xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";
     // Create the payload for the Live Activity
     var notificationPayload = new PushNotification
     {
         Aps = new Aps
         {
             Timestamp = DateTimeOffset.UtcNow.ToUnixTimeSeconds(),
             Event = "update",
             ContentState = new ContentState
             {
                 Status = "Charging",
         ChargeAmount = "65 kW",
                 DollarAmount = "$11.80",
                 timeDuration = "00:28",
                 Percentage = 80
             },
         }
     };
     await apnsService.SendApnsNotificationAsync(pushDeviceToken, "liveactivity", notificationPayload);

    Key Points to Remember

    1. JWT Token Validity: Tokens expire after 1 hour. Ensure you regenerate tokens periodically.
    2. APNs Endpoint: Use the correct environment (production or development) based on your app stage.
    3. Error Handling: Handle HTTP responses carefully. Common issues include invalid tokens or expired certificates.
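Because a provider token is valid for up to an hour, re-signing a JWT on every push is unnecessary. A minimal caching sketch (the ProviderTokenCache class is illustrative, not part of the original ApnsService; you would wrap GetProviderToken with it):

```csharp
using System;

// Hypothetical helper: caches a provider token and regenerates it
// shortly before APNs' one-hour validity window ends.
public class ProviderTokenCache
{
    private readonly Func<string> _generate;
    private readonly TimeSpan _lifetime;
    private string _token;
    private DateTime _issuedAtUtc;

    public ProviderTokenCache(Func<string> generate, TimeSpan? lifetime = null)
    {
        _generate = generate;
        // Refresh at 50 minutes, comfortably inside the 60-minute expiry.
        _lifetime = lifetime ?? TimeSpan.FromMinutes(50);
    }

    public string GetToken()
    {
        if (_token == null || DateTime.UtcNow - _issuedAtUtc >= _lifetime)
        {
            _token = _generate();
            _issuedAtUtc = DateTime.UtcNow;
        }
        return _token;
    }
}
```

Passing `() => GetProviderToken()` as the generator keeps the signing logic unchanged while avoiding a new token per request.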

    Debugging Tips

    • Ensure your device token is correct and valid.
    • Double-check your .p8 file, Team ID, Key ID, and Bundle ID.
    • Use tools like Postman to test your APNs requests independently.

    Conclusion

    Sending Live Activity push notifications using .NET involves integrating APNs with proper authentication and payload setup. The ApnsService class demonstrated here provides a robust starting point for developers looking to enhance user engagement with real-time updates.🚀

    Mastering Feature Flag Management with Azure Feature Manager

    In the dynamic realm of software development, the power to adapt and refine your application’s features in real-time is a game-changer. Azure Feature Manager emerges as a potent tool in this scenario, empowering developers to effortlessly toggle features on or off directly from the cloud. This comprehensive guide delves into how Azure Feature Manager can revolutionize your feature flag control, enabling seamless feature introduction, rollback capabilities, A/B testing, and tailored user experiences.

    Introduction to Azure Feature Manager

    Azure Feature Manager is a sophisticated component of Azure App Configuration. It offers a unified platform for managing feature flags across various environments and applications. Its capabilities extend to gradual feature rollouts, audience targeting, and seamless integration with Azure Active Directory for enhanced access control.

    Step-by-Step Guide to Azure App Configuration Setup

    Initiating your journey with Azure Feature Manager begins with setting up an Azure App Configuration store. Follow these steps for a smooth setup:

    1. Create Your Azure App Configuration: Navigate to the Azure portal and initiate a new Azure App Configuration resource. Fill in the required details and proceed with creation.
    2. Secure Your Access Keys: Post-creation, access the “Access keys” section under your resource settings to retrieve the connection strings, crucial for your application’s connection to the Azure App Configuration.

    Crafting Feature Flags

    To leverage feature flags in your application:

    1. Within the Azure App Configuration resource, click on “Feature Manager” and then “+ Add” to introduce a new feature flag.
    2. Identify Your Feature Flag: Name it thoughtfully, as this identifier will be used within your application to assess the flag’s status.

    Application Integration Essentials

    Installing Required NuGet Packages

    Your application necessitates specific packages for Azure integration:

    • Microsoft.Extensions.Configuration.AzureAppConfiguration
    • Microsoft.FeatureManagement.AspNetCore

    These can be added via your IDE or through the command line in your project directory:

    dotnet add package Microsoft.Extensions.Configuration.AzureAppConfiguration
    dotnet add package Microsoft.FeatureManagement.AspNetCore

    Application Configuration

    Modify your appsettings.json to include your Azure App Configuration connection string:

    {
      "ConnectionStrings": {
        "AppConfig": "Endpoint=https://<your-resource-name>.azconfig.io;Id=<id>;Secret=<secret>"
      }
    }

    Further, in Program.cs (or Startup.cs for earlier .NET versions), ensure your application is configured to utilize Azure App Configuration and activate feature management:

    var builder = WebApplication.CreateBuilder(args);
    
    builder.Configuration.AddAzureAppConfiguration(options =>
    {
        options.Connect(builder.Configuration["ConnectionStrings:AppConfig"])
               .UseFeatureFlags();
    });
    
    builder.Services.AddFeatureManagement();

    Implementing Feature Flags

    To verify a feature flag’s status within your code:

    using Microsoft.FeatureManagement;
    
    public class FeatureService
    {
        private readonly IFeatureManager _featureManager;
    
        public FeatureService(IFeatureManager featureManager)
        {
            _featureManager = featureManager;
        }
    
        public async Task<bool> IsFeatureActive(string featureName)
        {
            return await _featureManager.IsEnabledAsync(featureName);
        }
    }
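Beyond checking flags in a service class, the Microsoft.FeatureManagement.AspNetCore package can also gate whole MVC endpoints declaratively. A minimal sketch (the controller and flag names here are assumptions, not from the original setup):

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.FeatureManagement.Mvc;

public class BetaController : Controller
{
    // When the "Beta" flag is disabled, the action returns 404 by default.
    [FeatureGate("Beta")]
    public IActionResult Index()
    {
        return Ok("Beta feature is enabled");
    }
}
```

This keeps flag checks out of the action body entirely and is useful when a whole endpoint should disappear while a feature is off.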

    Advanced Implementation: Custom Targeting Filter

    Go to Azure and modify your feature flag. Make sure the “Default Percentage” is set to 0; in this scenario we want to target a specific user based on their email address.

    For user- or group-specific targeting, we need to implement ITargetingContextAccessor. In the example below, we build the targeting context from the user’s email address, which comes from the JWT:

    using Microsoft.FeatureManagement.FeatureFilters;
    using System.Security.Claims;
    
    namespace SampleApp
    {
        public class B2CTargetingContextAccessor : ITargetingContextAccessor
        {
            private const string TargetingContextLookup = "B2CTargetingContextAccessor.TargetingContext";
            private readonly IHttpContextAccessor _httpContextAccessor;
    
            public B2CTargetingContextAccessor(IHttpContextAccessor httpContextAccessor)
            {
                _httpContextAccessor = httpContextAccessor;
            }
    
            public ValueTask<TargetingContext> GetContextAsync()
            {
                HttpContext httpContext = _httpContextAccessor.HttpContext;
    
                //
                // Try cache lookup
                if (httpContext.Items.TryGetValue(TargetingContextLookup, out object value))
                {
                    return new ValueTask<TargetingContext>((TargetingContext)value);
                }
    
                ClaimsPrincipal user = httpContext.User;
    
                //
                // Build targeting context based off user info
                TargetingContext targetingContext = new TargetingContext
                {
                    UserId = user.FindFirst("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress")?.Value,
                    Groups = new string[] { }
                };
    
                //
                // Cache for subsequent lookup
                httpContext.Items[TargetingContextLookup] = targetingContext;
    
                return new ValueTask<TargetingContext>(targetingContext);
            }
        }
    }

    In Program.cs (or Startup.cs for earlier .NET versions), modify your feature management registration to use the targeting filter:

        builder.Services.AddFeatureManagement().WithTargeting<B2CTargetingContextAccessor>();

    You also need to pass the targeting context accessor to the feature manager:

    using Microsoft.FeatureManagement;
    
    public class FeatureService
    {
        private readonly IFeatureManager _featureManager;
        private readonly ITargetingContextAccessor _targetContextAccessor;
    
        public FeatureService(IFeatureManager featureManager, ITargetingContextAccessor targetingContextAccessor)
        {
            _featureManager = featureManager;
            _targetContextAccessor = targetingContextAccessor;
        }
    
        public async Task<bool> IsFeatureActive()
        {
            return await _featureManager.IsEnabledAsync("UseLocationWebhook", _targetContextAccessor);
        }
    }

    Simplifying API Testing in Postman: Auto-refresh OAuth Tokens with Pre-request Scripts

    Introduction:

    Welcome to a quick guide on enhancing your API testing workflow in Postman! If you frequently work with APIs that require OAuth tokens, you know the hassle of manually refreshing tokens. This blog post will show you how to automate this process using Pre-request scripts in Postman.

    What You Need:

    • Postman installed on your system.
    • API credentials (Client ID, Client Secret) for the OAuth token endpoint.

    Step 1: Setting Up Your Environment

    • Open Postman and select your workspace.
    • Go to the ‘Environments’ tab and create a new environment (e.g., “MyAPIEnvironment”).
    • Add variables like accessToken, clientId, clientSecret, and tokenUrl.

    Step 2: Creating the Pre-request Script

    • Go to the ‘Pre-request Script’ tab in your request or collection.
    • Add the following JavaScript code:
    if (!pm.environment.get('accessToken') || Date.now() >= (pm.environment.get('tokenExpiresAt') || 0)) {
        const getTokenRequest = {
            url: pm.environment.get('tokenUrl'),
            method: 'POST',
            header: 'Content-Type:application/x-www-form-urlencoded',
            body: {
                mode: 'urlencoded',
                urlencoded: [
                    { key: 'client_id', value: pm.environment.get('clientId') },
                    { key: 'client_secret', value: pm.environment.get('clientSecret') },
                    { key: 'grant_type', value: 'client_credentials' }
                ]
            }
        };

        pm.sendRequest(getTokenRequest, (err, res) => {
            if (err) {
                console.log(err);
            } else {
                const jsonResponse = res.json();
                pm.environment.set('accessToken', jsonResponse.access_token);
                // Record when the token expires (with a 60-second safety margin)
                // so later requests know when to refresh it.
                pm.environment.set('tokenExpiresAt', Date.now() + (jsonResponse.expires_in - 60) * 1000);
            }
        });
    }

    Step 3: Using the Access Token in Your Requests

    • In the ‘Authorization’ tab of your API request, select ‘Bearer Token’ as the type.
    • For the token, use the {{accessToken}} variable.

    Step 4: Testing and Verification

    • Send your API request.
    • The Pre-request script should automatically refresh the token if it’s not set or expired.
    • Check the Postman Console to debug or verify the token refresh process.

    Conclusion: Automating token refresh in Postman saves time and reduces the error-prone process of manual token updates. With this simple Pre-request script, your OAuth token management becomes seamless, letting you focus more on testing and less on token management.


    Semantically Generating NuGet Package Versions: Best Practices Using Branch Conventions in Azure DevOps Pipelines

    Learn how to streamline NuGet package versioning in Azure DevOps pipelines by generating semantic versions based on branch conventions. Proper versioning is essential for effective package management, and semantic versioning ensures compatibility and clear communication of changes.

    There are a few main use cases for this, e.g. when you want to share the schema of common objects or a library across different microservices/APIs, but you also want to be able to make a minor change and try it on one microservice before it is merged to master. In that case you want to create a NuGet package version that is just for development or testing until the change is merged. This is all possible and is managed through the versioning convention.

    A few things to look at below are the variables (Major, Minor, Patch, versionPatch, versionNumber). Note the task that appends “alpha” (you can change it to “beta”) to the version variable when the branch is not master. You also need to set the versioningScheme on the NuGet pack task to use the versionNumber variable defined above.

    The stable version now shows up in the NuGet package manager; make sure “Prerelease” is unticked.

    For the versions that come off a branch, you need to tick “Include prerelease”.
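To make the convention concrete, here is a small sketch of the version string the pipeline produces (the helper is illustrative only, not part of the pipeline):

```csharp
using System;

public static class VersionDemo
{
    // Mirrors the pipeline: Major.Minor.<counter>, with "-alpha" appended
    // when the build does not come from the master branch.
    public static string NuGetVersion(int major, int minor, int patchCounter, bool isMaster)
    {
        var version = $"{major}.{minor}.{patchCounter}";
        return isMaster ? version : version + "-alpha";
    }
}
```

So with Major=1 and Minor=0, a master build publishes a stable version such as 1.0.5, while a feature-branch build publishes a prerelease such as 1.0.6-alpha.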

    Sample pipelines yml below

    trigger:
      batch: true
      branches:
        include:
        - '*'
    
    pool:
      vmImage: ubuntu-latest
    
    variables:  
      projectName: 'Contoso.Messaging.csproj'
      projectPath: '**/Contoso.Messaging.csproj'
      buildPlatform: 'Any CPU'
      buildConfiguration: 'Release'
      Major: '1'
      Minor: '0'
      Patch: '0'
      versionPatch: $[counter(variables['Patch'], 0)]
      versionNumber: $(Major).$(Minor).$(versionPatch)
    
    steps:
    
    # Add this Command to Include the .NET 6 SDK
    - task: UseDotNet@2
      displayName: Use .NET 6.0
      inputs:
        packageType: 'sdk'
        version: '6.0.x'
    
    - task: DotNetCoreCLI@2
      displayName: 'Restore'
      inputs:
        command: 'restore'
        projects: '$(projectPath)'
    
    - task: DotNetCoreCLI@2
      displayName: 'Build'
      inputs:
        command: 'build'
        arguments: '--configuration $(buildConfiguration) -p:Version=$(versionNumber)'
        projects: '$(projectPath)'
        
    - script: echo '##vso[task.setvariable variable=versionNumber]$(versionNumber)-alpha'
      displayName: "Set Nuget package version number"
      condition: ne(variables['Build.SourceBranchName'], 'master')
    
    - task: DotNetCoreCLI@2
      displayName: 'Pack'
      inputs:
        command: 'pack'
        packagesToPack: '**/*.csproj'
        versioningScheme: 'byEnvVar'
        versionEnvVar: 'versionNumber'
        outputDir: '$(Build.ArtifactStagingDirectory)'
    
    - task: NuGetAuthenticate@0
      displayName: 'NuGet Authenticate'
    
    - task: NuGetCommand@2
      displayName: 'NuGet push'
      inputs:
        command: push
        nuGetFeedType: 'internal'
        publishVstsFeed: 'xxxxxxxxxxxxxxxxxx'
        allowPackageConflicts: true

    Read and remove scheduled message in Azure Service Bus

    Ever wondered how you can remove scheduled messages in a Service Bus topic or queue? We had a bug where one of our services kept scheduling messages that were never meant to be queued. We deployed a fix, but to verify it we had to be sure no messages were still being scheduled, so we needed to remove all scheduled messages first.

    You can also use the same code to check whether your messages are being scheduled correctly according to your logic.

    I was expecting Service Bus Explorer in the Azure portal to let us peek into these scheduled messages, but unfortunately it doesn’t have this feature.

    For a Service Bus topic you can use the code below (this sample uses the older Microsoft.Azure.ServiceBus SDK):

    class Program
        {
    
            // Connection String for the namespace can be obtained from the Azure portal under the
            // 'Shared Access policies' section.
            const string ServiceBusConnectionString = "[Servicebus connection string with entity path]";
            static ITopicClient topicClient;
            static IMessageReceiver messageReceiver;
    
            static void Main(string[] args)
    
            {
                MainAsync().GetAwaiter().GetResult();
            }
    
            static async Task MainAsync()
            {
                var sbConnStringBuilder = new ServiceBusConnectionStringBuilder(ServiceBusConnectionString);
                topicClient = new TopicClient(sbConnStringBuilder);
                Console.WriteLine("======================================================");
                Console.WriteLine("Press any key to exit..");
                Console.WriteLine("======================================================");
    
                messageReceiver = new MessageReceiver(sbConnStringBuilder);
    
                Message message = await messageReceiver.PeekAsync();
    
            // A peeked message whose ScheduledEnqueueTimeUtc is in the future is a scheduled
            // message (the property is a non-nullable DateTime, so a null check never fires)

            while (message != null)
            {
                if (message.ScheduledEnqueueTimeUtc > DateTime.UtcNow)
                {
                    // Cancel (remove) the scheduled message
                    await topicClient.CancelScheduledMessageAsync(message.SystemProperties.SequenceNumber);
                }
                message = await messageReceiver.PeekAsync();
            }
    
                Console.ReadKey();
                await topicClient.CloseAsync();
            }
    
        }
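If you are on the newer Azure.Messaging.ServiceBus SDK instead, a hedged sketch of the same cleanup looks like this (the connection string and entity names are placeholders; peeked messages expose a State property that identifies scheduled messages directly):

```csharp
using Azure.Messaging.ServiceBus;

// Sketch: cancel all scheduled messages on a topic using the newer SDK.
await using var client = new ServiceBusClient("<connection-string>");
ServiceBusReceiver receiver = client.CreateReceiver("topic-name", "subscription-name");
ServiceBusSender sender = client.CreateSender("topic-name");

long fromSequenceNumber = 0;
while (true)
{
    // Peek a batch without locking or removing messages.
    var batch = await receiver.PeekMessagesAsync(100, fromSequenceNumber);
    if (batch.Count == 0) break;

    foreach (ServiceBusReceivedMessage message in batch)
    {
        if (message.State == ServiceBusMessageState.Scheduled)
        {
            // Cancelling removes the scheduled message before it is enqueued.
            await sender.CancelScheduledMessageAsync(message.SequenceNumber);
        }
        fromSequenceNumber = message.SequenceNumber + 1;
    }
}
```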

    For a Service Bus queue you can use the code below:

    class Program
        {
    
            // Connection String for the namespace can be obtained from the Azure portal under the
            // 'Shared Access policies' section.
            const string ServiceBusConnectionString = "[Servicebus connection string with entity path]";
            static IQueueClient queueClient;
            static IMessageReceiver messageReceiver;
    
            static void Main(string[] args)
    
            {
                MainAsync().GetAwaiter().GetResult();
            }
    
            static async Task MainAsync()
            {
                var sbConnStringBuilder = new ServiceBusConnectionStringBuilder(ServiceBusConnectionString);
                queueClient = new QueueClient(sbConnStringBuilder);
                Console.WriteLine("======================================================");
                Console.WriteLine("Press any key to exit..");
                Console.WriteLine("======================================================");
    
                messageReceiver = new MessageReceiver(sbConnStringBuilder);
    
                Message message = await messageReceiver.PeekAsync();
    
                // A scheduled message carries a non-default ScheduledEnqueueTimeUtc
    
                while (message != null)
                {
                    // ScheduledEnqueueTimeUtc is a non-nullable DateTime in this SDK,
                    // so compare against the default value rather than null
                    if (message.ScheduledEnqueueTimeUtc != default(DateTime))
                    {
                        // Remove the scheduled message
                        await queueClient.CancelScheduledMessageAsync(message.SystemProperties.SequenceNumber);
                    }
                    message = await messageReceiver.PeekAsync();
                }
    
                Console.ReadKey();
                await queueClient.CloseAsync();
            }
    
        }

    Build Secure Integration Tests with Azure Key Vault in Azure DevOps

    Scenario: We have integration tests written in .NET using NUnit. We don’t want to store the API key and other sensitive information in the repository; instead, we want the tests to retrieve all secrets from Azure Key Vault. At the same time, we would like Test Engineers to be able to run the tests in their local environment.

    One way to achieve this is to use the test parameters feature of NUnit.

    Add a .runsettings file to your project. This file is for local development/testing only and should not be checked in with its values populated. The format can be something like below; the NUnit documentation on TestParameters has more details.

    <?xml version="1.0" encoding="utf-8" ?>
    <RunSettings>
    	<TestRunParameters>
    		<Parameter name="ApiKey" value="" />
    		<Parameter name="RefreshToken" value="" />
    	</TestRunParameters>
    </RunSettings>

    Most importantly, you need to configure your IDE as follows:

    1. Make sure autodetection of runsettings files is enabled in Visual Studio by checking this checkbox: Tools > Options > Test > Auto Detect runsettings Files.
    2. Make sure you have created your runsettings file in the root of your solution, not your project root.
    3. If all else fails and your tests still can’t find your .runsettings file, you can specify the file manually in the Test Explorer by selecting Options > Configure Run Settings > Select solution wide Run Settings file.

    For Visual Studio for Mac, you instead add the runsettings file path to the project file, and it will do the work:

    <Project Sdk="Microsoft.NET.Sdk">
    <PropertyGroup>
    <RunSettingsFilePath>$(MSBuildProjectDirectory)\.runsettings</RunSettingsFilePath>
    </PropertyGroup>
    …
    </Project>

    In your test class, you can retrieve the test parameters through TestContext.Parameters:

    [TestFixture]
        public class MyTests
        {
            // These cannot be readonly fields: [SetUp] runs after construction,
            // so readonly fields could not be assigned here.
            private string _apiKey;
            private string _refreshToken;
    
            [SetUp]
            public void PopulateConfigs()
            {
                _apiKey = TestContext.Parameters["ApiKey"];
                _refreshToken = TestContext.Parameters["RefreshToken"];
            }
    }
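When a parameter is missing (for example, an empty value in a local .runsettings file), the indexer returns null and tests later fail with unhelpful null reference errors. As an optional refinement not present in the original setup, a small hypothetical helper can fall back to an environment variable of the same name and fail fast with a clear message:

```csharp
using System;
using NUnit.Framework;

public static class TestConfig
{
    // Hypothetical helper: prefer a TestRunParameter, fall back to an
    // environment variable with the same name, and fail fast otherwise.
    public static string GetRequired(string name) =>
        TestContext.Parameters[name]
        ?? Environment.GetEnvironmentVariable(name)
        ?? throw new InvalidOperationException($"Missing test parameter '{name}'.");
}
```

In the fixture above, the assignments would then become `_apiKey = TestConfig.GetRequired("ApiKey");`, which also gives Test Engineers an environment-variable escape hatch locally.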

    On the Azure Pipelines YAML file, this is how you retrieve the secrets from Key Vault and inject them as test run parameters:

    pool:
      vmImage: ubuntu-latest
    
    trigger: none
    pr: none
    schedules:
    - cron: "0 20 * * Sun,Mon,Tue,Wed,Thu"
      displayName: Daily morning build
      branches:
        include:
        - master
      always: true
    
    variables:
      - name: dotnetVersion
        value: '7.0.x'
    
    stages:
    - stage:
      displayName: Run e2e .NET tests
      jobs:
      - job:
        displayName: build job
        steps:
        - task: UseDotNet@2
          displayName: Use dotnet $(dotnetVersion)
          inputs:
            packageType: sdk
            version: $(dotnetVersion)
        - task: DotNetCoreCLI@2
          displayName: dotnet restore
          inputs:
            command: 'restore'
        - task: DotNetCoreCLI@2
          displayName: 'dotnet build'
          inputs:
            command: 'build'
        - task: AzureKeyVault@2
          inputs:
            azureSubscription: 'My Service Principal'
            KeyVaultName: 'my-keyvault-dev'
            SecretsFilter: '*'
            RunAsPreJob: false
        - task: DotNetCoreCLI@2
          displayName: 'dotnet test'
          inputs:
            command: 'test'
            arguments: '-- "TestRunParameters.Parameter(name=\"ApiKey\", value=\"$(ApiKey)\")" "TestRunParameters.Parameter(name=\"RefreshToken\", value=\"$(RefreshToken)\")"'
    
    

    $(ApiKey) and $(RefreshToken) are mapped to your Azure Key Vault secret names: the AzureKeyVault@2 task turns each fetched secret into a pipeline variable of the same name.
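For local runs outside the pipeline, the same parameters can be passed on the command line, since everything after the single `--` is treated as run settings arguments (the values shown are placeholders):

```
dotnet test -- TestRunParameters.Parameter(name="ApiKey", value="<local-api-key>") TestRunParameters.Parameter(name="RefreshToken", value="<local-refresh-token>")
```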

    How Fear Based Leaders Destroy Employee Morale and Performance

    Fear is a powerful emotion that can motivate us to act or paralyze us from taking action. In the workplace, some leaders may use fear as a tool to influence their employees’ attitudes, values, or behaviors. However, this approach can have negative consequences for both the leaders and their teams. In this article, we will explore how fear-based leadership can destroy employee morale and performance, and what leaders can do instead to create a culture of psychological safety and empowerment.

    I have learned of some instances where, upon receiving a resignation letter from an employee in my previous organization, the manager tried to dissuade them from leaving by saying “Don’t resign or else you will regret it” and citing examples of former employees who faced difficulties in their new jobs. I find this to be a very unprofessional and unethical tactic by the manager. A true leader would be supportive of their team member’s career aspirations and wish them well for their future endeavors. They would also recognize that the employee might have the potential to start their own successful business someday or be a successful leader.

    What is fear-based leadership?

    Fear-based leadership is a style of management that relies on threats, punishments, intimidation, or coercion to achieve desired outcomes. Fear-based leaders may use various tactics to instill fear in their employees, such as:

    • Setting unrealistic expectations and deadlines
    • Micromanaging and controlling every aspect of work
    • Criticizing and blaming employees for mistakes
    • Withholding praise and recognition
    • Creating a competitive and hostile work environment
    • Ignoring or dismissing employees’ opinions and feedback
    • Threatening employees with job loss, demotion, or pay cuts

    Fear-based leaders may believe that fear is an effective motivator that can drive performance and productivity. They may also think that fear can help them maintain authority and control over their teams. However, research shows that fear-based leadership has many negative effects on both individuals and organizations.

    The effects of fear-based leadership

    Fear-based leadership can have detrimental impacts on employee morale and performance in various ways:

    • It demoralizes people: Fear-based leadership creates a power imbalance that erodes trust,
      respect, and dignity among employees. Employees may feel insecure, anxious, depressed,
      or hopeless about their work situation. They may also lose their sense of purpose and meaning in their work.
    • It creates a breeding ground for resentment: Some people may react with anger, frustration, or defiance to fear-based leadership. They may resent their leader for treating them unfairly or disrespectfully. They may also harbor negative feelings toward their colleagues who comply with or support the leader’s actions.
    • It impedes communication: Fear-based leadership discourages open and honest communication among employees.
      Employees may be afraid to speak up or share their ideas for fear of being ridiculed or punished by their leader. They may also avoid giving feedback or asking for help from their peers for fear of being seen as weak or incompetent. This leads to poor collaboration and information sharing within teams.
    • It inhibits innovation: Fear-based leadership stifles creativity and learning among employees. Employees may be reluctant to try new things or experiment with different solutions for fear of making mistakes or failing. They may also resist change or feedback for fear of losing their status quo or comfort zone. This hinders innovation and improvement within organizations.
    • It reduces engagement: Fear-based leadership lowers employee engagement levels. Employees may feel detached from their work goals and outcomes. They may also feel less motivated to perform well or go beyond expectations. They may only do the minimum required work to avoid negative consequences from their leader. This affects productivity and quality within organizations.

    What leaders can do instead

    Instead of using fear as a motivational tool for employees, leaders should create a culture of psychological safety
    and empowerment within organizations. Psychological safety is “a shared belief held by members of a team that the team is safe for interpersonal risk taking”.

    It means that employees feel comfortable expressing themselves without fearing negative repercussions from others.

    Empowerment is “the process of enhancing feelings of self-efficacy among organizational members through identification with organizational goals”. It means that employees feel confident in their abilities and have autonomy over their work decisions.

    Leaders who foster psychological safety and empowerment among employees can benefit from:

    • Higher trust: Employees trust leaders who treat them with respect, care, and fairness.
      They also trust colleagues who support them, listen to them, and collaborate with them. Trust enhances teamwork,
      cooperation, and loyalty within organizations.
    • Higher morale: Employees feel valued, appreciated, and recognized by their leaders who praise them, reward them,

    Power Apps – Mount a SQL Server table as an entity in Dataverse

    Business Case: We have an existing database from a legacy app, and we really enjoy how easy and fast it is to use Power Apps (model-driven app) to access entities in Dataverse. So can we mount an external table into Dataverse? The answer is yes: it is possible and straightforward.

    Add a SQL Server connection in Power Apps. For this POC, use “Authentication Type: SQL Server Authentication”, but the best practice is to use a service principal (Azure AD application).

    On the SQL Server side (in my case I am using Azure), you need to whitelist the Power Platform IP addresses. You can get the list from the “Managed connectors outbound IP addresses” documentation.

    In Power Apps, go to Tables, select New Table, then select “New table from external data”.

    Select the SQL Server connection we created earlier, then select the SQL Server table that you want to mount.

    Once that is done, you can see all the records from SQL Server as a virtual entity in your Dataverse.

