Technical Insights: Azure, .NET, Dynamics 365 & EV Charging Architecture

Explore Key Features of C# 14 for Developers

Comprehensive Analysis of C# 14: Key Features and Enhancements

  • C# 14 introduces significant features for enhancing developer productivity and performance.
  • Key enhancements include implicit span conversions, extended `nameof` capabilities, and lambda expression improvements.
  • New features like the contextual `field` keyword and partial constructors promote modular design and cleaner code.
  • User-defined compound assignment operators and dictionary expressions improve performance and simplify code.
  • C# 14 focuses on memory safety, streamlined syntax, and community-driven enhancements.

Enhanced Span Support for Memory Optimization

One of the standout features of C# 14 is the first-class support for System.Span<T> and System.ReadOnlySpan<T>, which is indicative of a broader emphasis on memory safety and performance optimization in high-efficiency scenarios such as real-time data processing and resource-constrained environments. The introduction of implicit conversions between spans and arrays significantly simplifies the handling of memory-intensive operations, allowing developers to manage memory more effectively without incurring the overhead associated with manual marshaling.

For instance, when converting a string array to a ReadOnlySpan<string>, C# 14 allows a seamless assignment:

string[] words = { "Hello", "World" };
ReadOnlySpan<string> span = words; // Implicit conversion

This change leverages runtime optimizations to minimize heap allocations, thereby making spans ideal for performance-critical applications, such as game development or Internet of Things (IoT) scenarios where every byte of memory counts. Furthermore, as the compiler has been enhanced to recognize span relationships natively, developers can now utilize spans as extension receivers and benefit from improved generic type inference, streamlining their development experience.
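As a hedged illustration of spans as extension receivers (the helper below is illustrative, not part of the BCL), an extension method declared on ReadOnlySpan<char> can now be invoked directly on an array thanks to the implicit conversions:

using System;

public static class SpanExtensions
{
    // Illustrative helper: counts spaces without allocating a substring.
    public static int CountSpaces(this ReadOnlySpan<char> text)
    {
        int count = 0;
        foreach (char c in text)
        {
            if (c == ' ') count++;
        }
        return count;
    }
}

// Usage (inside a method body): the char[] converts implicitly to ReadOnlySpan<char>
// at the receiver position.
// char[] letters = { 'H', 'i', ' ', 't', 'h', 'e', 'r', 'e' };
// int spaces = letters.CountSpaces();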

Extended `nameof` Capabilities for Reflection

In an increasingly complex programming landscape, understanding type names dynamically becomes essential. C# 14 enhances the capabilities of the nameof operator by enabling it to resolve unbound generic type names. This is a significant simplification: previously, nameof could not accept an unbound generic type at all, so developers fell back on typeof(List<>).Name, which yields cumbersome strings such as “List`1”. With the new feature, invoking nameof(List<>) cleanly returns “List”.

This enhancement is particularly beneficial in the context of frameworks that rely heavily on reflection, such as serialization libraries and dependency injection containers. For example, when building error messages for generic repositories, the use of nameof can greatly improve maintainability and readability:

throw new InvalidOperationException($"{nameof(IRepository<>)} requires implementation.");

By reducing the clutter in logs caused by arity notation, developers can focus on more meaningful output, significantly improving debugging efforts and enhancing the overall developer experience. Feedback from the C# community has been instrumental in shaping this capability, as developers sought to minimize string literals in reflection-heavy code bases.

Lambda Expressions with Parameter Modifiers

C# 14 brings scalability and clarity to lambda expressions by allowing them to incorporate parameter modifiers such as ref, in, out, scoped, and ref readonly, all without needing to specify parameter types explicitly. Prior to this enhancement, developers often faced cumbersome syntax when defining output parameters, which detracted from code readability and conciseness.

The following example illustrates how this feature simplifies lambda expressions:

delegate bool TryParse<T>(string text, out T result);
TryParse<int> parse = (string s, out int result) => int.TryParse(s, out result);

This can now be rewritten more cleanly in C# 14 as:

TryParse<int> parse = (s, out result) => int.TryParse(s, out result);

The absence of explicit type annotations improves the fluency of the code, making it easier to read and write and aligning with existing lambda functionality. However, it is essential to note that modifiers like params still require explicit typing due to compiler constraints. This enhancement particularly benefits low-level interoperability scenarios where output parameters are frequently utilized, as it reduces boilerplate code and fosters a more fluid coding experience.

Field Keyword in Auto-Implemented Properties

C# 14 introduces the contextual field keyword, which greatly streamlines the handling of auto-implemented properties by granting direct access to compiler-generated backing fields. This improvement is particularly notable in scenarios requiring null validation or other property logic, which traditionally necessitated verbose manual backing field management.

Consider this example from prior versions:

private string _message;

public string Message
{
    get => _message;
    set => _message = value ?? throw new ArgumentNullException(nameof(value));
}

With C# 14, developers can eliminate redundancy by utilizing the new field keyword:

public string Message
{
    get;
    set => field = value ?? throw new ArgumentNullException(nameof(value));
}

Here, field acts as a placeholder for the implicit backing field, enhancing readability and maintainability while preserving encapsulation principles. However, users must remain mindful of potential symbol collisions, as using field as an identifier within class members requires disambiguation (e.g., @field or this.field).

This change not only aids in reducing boilerplate but also encourages more concise property implementations, ultimately resulting in cleaner, more maintainable code across projects.

Partial Events and Constructors for Modular Design

With the expanding complexity of software architectures, C# 14 introduces partial events and constructors, which enhance code modularity and facilitate a more organized approach to large codebases. By allowing event and constructor definitions to be split across multiple files, developers can structure their code more flexibly and responsively.

For instance, when defining a logger class, developers can now separate the event declaration and implementation:

// File1.cs
public partial class Logger
{
    public partial event Action<string>? LogEvent;
}

// File2.cs
public partial class Logger
{
    public partial event Action<string>? LogEvent
    {
        add => Subscribe(value);
        remove => Unsubscribe(value);
    }
}

This flexibility extends to partial constructors as well, enabling developers to distribute initializer logic across different files. While only one declaration can employ primary constructor syntax (e.g., Logger(string source)), this capability fosters enhanced collaboration and better organization within teams working on large-scale applications or utilizing code generation tools.

The implications of this feature are significant for source generation and modern architecture patterns, where the separation of concerns and maintainability are paramount. By allowing tool-generated code to inject validation and initialization logic into user-defined constructors, this enhancement streamlines workflows and supports the continuous evolution of application architectures.
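For completeness, a minimal sketch of a partial constructor split across two files (file and member names are illustrative, mirroring the Logger example above):

// Logger.Generated.cs - declaring declaration (e.g. emitted by a source generator)
public partial class Logger
{
    public partial Logger(string source);
}

// Logger.cs - implementing declaration supplied by the developer
public partial class Logger
{
    private readonly string _source;

    public partial Logger(string source)
    {
        _source = source ?? throw new System.ArgumentNullException(nameof(source));
    }
}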

Extension Members for Augmenting Types

As a preview feature in C# 14, extension members change the way developers can augment existing types, allowing the addition of properties and static members alongside traditional extension methods. This capability leads to a more intuitive and discoverable syntax, particularly beneficial for extending closed-source or interface-based types without necessitating complex inheritance setups.

For example, adding an IsEmpty property to the IEnumerable<T> interface can now be accomplished straightforwardly:

using System.Linq;

public static class EnumerableExtensions
{
    extension<T>(IEnumerable<T> source)
    {
        public bool IsEmpty => !source.Any();
    }
}

This syntax not only enhances clarity but also promotes code reuse and modularity:

if (strings.IsEmpty) return;

In addition, static extension members bolster usability and flexibility when dealing with types that developers cannot directly modify. The implications for team projects and libraries are substantial, as this feature allows for richer connectivity across application codebases while preserving the integrity of existing types.

The extension member functionality is part of an ongoing effort to make C# more expressive and adaptable, fulfilling developers’ needs for extended functionality while maintaining core principles of object-oriented programming. As this feature matures, developers can look forward to an enriched language experience that aligns more closely with modern programming paradigms.

Null-Conditional Assignment for Safe Mutation

Null safety continues to be a core concern in modern development, and C# 14 introduces a compelling enhancement to null-conditional operators: they can now be utilized on assignment targets. This evolution allows for more concise syntax and safer code execution, as the language can now intelligently bypass assignments for null objects without requiring explicit null checks.

For example, prior to C# 14, developers would write:

if (customer != null) customer.Order = GetOrder();

With the introduction of null-conditional assignment, this logic simplifies to:

customer?.Order = GetOrder();

In this case, if customer is null, the assignment of Order is gracefully skipped, significantly reducing overhead for conditional checks. This also applies to indexed assignments, as shown in the following example:

dict?["key"] = value; // Assigns only if dict is non-null

While these enhancements integrate seamlessly into existing null-coalescing patterns, it is worth remembering that a skipped assignment happens silently: in a chained target such as obj?.A?.B = value, the right-hand side is not evaluated and nothing is assigned when any intermediary reference is null, so code that must react to a missed update still needs an explicit check. Nonetheless, this feature represents a significant step forward in safeguarding against null reference exceptions, enhancing the overall reliability of C# applications.

User-Defined Compound Assignment Operators

One of the most innovative features in C# 14 is the ability for developers to overload compound assignment operators such as += and -=. This grants developers the ability to optimize performance during mutation operations by directly altering existing objects rather than creating new instances, which is especially beneficial in high-efficiency contexts like mathematical computations.

For instance, a matrix class could utilize user-defined compound operators as follows:

public class Matrix
{
    public double[,] Values; // Matrix values

    public int Rows => Values.GetLength(0);
    public int Cols => Values.GetLength(1);

    // In-place addition: mutates this instance instead of allocating a new Matrix.
    public void operator +=(Matrix other)
    {
        for (int i = 0; i < Rows; i++)
            for (int j = 0; j < Cols; j++)
                Values[i, j] += other.Values[i, j];
    }
}

This syntax supports in-place mutations, avoiding the need for redundant memory allocations, which can be critical when dealing with large data structures. Notably, the operator must adhere to specific constraints, returning void and omitting static modifiers, due to its in-place nature, enforcing consistency with language rules to prevent unexpected behavior.

Through the strategic utilization of user-defined compound assignment operators, developers can achieve significant performance gains, with benchmarks indicating up to 40% fewer allocations in computation-intensive workloads. This capability empowers high-performance applications to operate seamlessly under heavy load, enhancing the robustness of numerical algorithms and data processing workflows.

Dictionary Expressions and Collection Enhancements

While still in development, C# 14 introduces the concept of dictionary expressions, poised to revolutionize how developers initialize dictionaries. This feature aims to provide an intuitive syntax akin to other collection initializers, allowing for cleaner and more concise code:

Dictionary<string, int> ages = ["Alice": 30, "Bob": 35];

This syntax reduces typing overhead and enhances readability compared to traditional dictionary initialization methods. Additionally, simultaneous enhancements to collection expressions allow for optimized initialization of collections, enabling more efficient operations during startup phases.

For example, using collection expressions like [1, 2, ..existing] can lead to improved startup performance due to internal optimizations that minimize individual Add calls. These enhancements collectively serve to streamline the coding experience, enabling developers to focus on core logic rather than boilerplate initialization code and improving the overall performance of applications.
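For reference, the spread element mentioned above builds on the collection-expression syntax introduced in C# 12; a small example:

using System.Collections.Generic;

int[] existing = [3, 4, 5];

// The compiler can pre-size the target and bulk-copy the spread element
// instead of issuing individual Add calls.
List<int> combined = [1, 2, .. existing];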

Compiler Breaking Changes and Adoption Guidance

With any significant language update, developers must navigate breaking changes to ensure smooth transitions to new features. C# 14 introduces specific alterations that warrant careful attention. One notable change is the treatment of the scoped modifier in lambda expressions, which has transitioned into a reserved keyword. This shift necessitates the use of the @ sign for identifiers previously named scoped:

var v = (scoped s) => { ... }; // Error: 'scoped' is reserved

In this case, developers should escape the name as @scoped (for example, (@scoped s) => { ... }) if scoped is actually the name of a type they need to reference.

Moreover, the new implicit span conversions may introduce ambiguities in overload resolution, especially in scenarios involving method overloading between Span<T> and standard arrays. To mitigate this risk, developers should employ explicit casting to .AsSpan() or utilize the OverloadResolutionPriorityAttribute to guide the compiler on intended overload selections.
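A hedged sketch of that disambiguation (the Process overloads below are illustrative):

using System;

public static class OverloadDemo
{
    public static void Process(int[] values) { /* array-based path */ }
    public static void Process(ReadOnlySpan<int> values) { /* span-based path */ }

    public static void Caller()
    {
        int[] data = { 1, 2, 3 };

        // If the new implicit conversions change which overload the compiler prefers,
        // state the intent explicitly rather than relying on betterness rules:
        Process(data.AsSpan());   // forces the span overload
        Process((int[])data);     // identity cast documents and forces the array overload
    }
}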

To ensure a successful transition, developers are advised to conduct thorough testing with the .NET 10 SDK and address any warnings or breaking changes using #pragma directives or by carefully managing type disambiguations. This proactive approach will facilitate embracing C# 14’s enhancements while maintaining robust codebases.

Conclusion: Strategic Impact and Future Directions

In summary, C# 14 embodies a substantial leap forward in refining the C# language, equipping developers with enhanced language ergonomics and performance-oriented features. The focus on implicit spans, improved null handling, and the introduction of the contextual field keyword significantly aligns with modern development paradigms that prioritize memory safety and streamlined syntax.

Developers should consider incorporating recommendations such as adopting the field keyword to reduce boilerplate in property handling, leveraging partial events and constructors in extensive codebases, and conducting audits on compound operators within numerical computation libraries to uncover allocation hotspots.

Looking ahead, as the ecosystem surrounding C# evolves, further iterations may finalize features like dictionary expressions and expand support for both static and instance extension members. As teams stabilize their tooling around .NET 10, placing a priority on these enhancements will empower their applications to excel within a rapidly advancing technological landscape. Emphasizing a balance between preview features and production stability will be crucial as organizations seek to capitalize on the opportunities presented by C# 14 and beyond.

FAQs

What are the main enhancements in C# 14?
C# 14 introduces significant improvements like implicit span conversions, extended capabilities for the nameof operator, and enhancements to lambda expressions, among others, aimed at improving developer productivity and code quality.

How does C# 14 improve memory management?
With the first-class support for System.Span<T> and the introduction of null-conditional assignment, C# 14 optimizes memory handling by reducing heap allocations and simplifying null checks.

What should developers be cautious about with breaking changes?
Developers need to navigate changes such as the reserved status of the scoped modifier and potential ambiguities with implicit span conversions to ensure smooth transitions to C# 14.

Understanding COM Components in C# for Interoperability

COM Components in C#: Enabling Interoperability in .NET Applications

  • Understanding the Component Object Model (COM) is essential for seamless technology integration.
  • C# provides robust interop features to expose and consume COM components effortlessly.
  • Proper resource management in COM is crucial for ensuring efficient memory usage.
  • Integrating AI and automation can significantly enhance COM component functionality.
  • Managing visibility and threading models is key to successful COM implementations.

What is COM?

The Component Object Model (COM), developed by Microsoft, is a binary software standard that facilitates inter-process communication and enables dynamic object creation across different programming languages. Its core goal is to provide a flexible and reusable approach to component development by defining a standard interaction mechanism:

  • Language-Agnostic: COM is not tied to any programming language, allowing components to be created and consumed in various languages, thus promoting wider interoperability.
  • Object-Oriented: COM components are organized around object-oriented principles, allowing for encapsulation, inheritance, and polymorphism.

COM is vital for various Microsoft technologies such as Object Linking and Embedding (OLE), ActiveX, and COM+.

Key Concepts of COM

1. COM Interfaces and Objects

In COM, interfaces form the backbone of interaction between clients and components. Each interface comprises a collection of abstract operations that promote loose coupling. The base interface, IUnknown, supports fundamental methods, including reference counting and interface querying via the QueryInterface mechanism. Each COM interface is uniquely identified by a UUID (Universally Unique Identifier), ensuring that clients interact with the correct versions of COM objects.

2. COM in C#: Interoperability

C# provides robust support for consuming and exposing COM components through interop features. This allows for seamless interaction between native COM components and managed .NET code.

Exposing a C# class to COM requires several steps:

  1. Declare Public Interface: Define a public interface that lists the methods and properties that will be accessible to COM clients.
  2. Use COM Attributes: Apply attributes like [ComVisible(true)] and [Guid("...")] to mark classes and interfaces for import into the COM system.
  3. Register Assembly: Set your assembly to “Register for COM Interop” in project properties, allowing it to register with the Windows registry.

Example Implementation

Let’s explore a simple example of how to create a COM component in C#. Here, we will create a basic calculator that can be accessed via COM.

Step 1: Define the Interface

using System.Runtime.InteropServices;

namespace CalculatorCOM
{
    [Guid("12345678-abcd-efgh-ijkl-123456789012")]
    [ComVisible(true)]
    public interface ICalculator
    {
        double Add(double a, double b);
        double Subtract(double a, double b);
    }
}

Step 2: Implement the Interface

using System.Runtime.InteropServices;

namespace CalculatorCOM
{
    [Guid("87654321-lkjh-gfed-cba-210987654321")]
    [ComVisible(true)]
    public class Calculator : ICalculator
    {
        public double Add(double a, double b) => a + b;

        public double Subtract(double a, double b) => a - b;
    }
}

Step 3: Register for COM Interop

In your project properties, check the “Register for COM Interop” option. After building the project, the COM component will be available for use in any COM-compatible environments.

Managing COM Lifetime and Activation

COM components are not statically linked; they are activated on demand at runtime. Native clients create instances of COM objects using system APIs such as CoGetClassObject and CoCreateInstance, while .NET clients typically go through Type.GetTypeFromProgID and Activator.CreateInstance. Proper resource management relies on the explicit release of object references to ensure that memory and resources are correctly freed.

Activation Example

Below is a simple C# client code demonstrating how to use the Calculator COM object:

using System;

class Program
{
    static void Main()
    {
        Type calculatorType = Type.GetTypeFromProgID("CalculatorCOM.Calculator");
        dynamic calculator = Activator.CreateInstance(calculatorType);
        
        double resultAdd = calculator.Add(5.0, 10.0);
        double resultSubtract = calculator.Subtract(15.0, 5.0);
        
        Console.WriteLine($"Addition Result: {resultAdd}");
        Console.WriteLine($"Subtraction Result: {resultSubtract}");
    }
}
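When the client consumes a native COM object (rather than a managed class like the one above), deterministic cleanup is worth adding; a hedged sketch, using a hypothetical ProgID:

using System;
using System.Runtime.InteropServices;

Type comType = Type.GetTypeFromProgID("Some.Native.ProgId");
object instance = Activator.CreateInstance(comType);
try
{
    // ... call methods on the object ...
}
finally
{
    // Only genuine runtime-callable wrappers need (and accept) an explicit release.
    if (instance != null && Marshal.IsComObject(instance))
    {
        Marshal.ReleaseComObject(instance);
    }
}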

Common Pitfalls and Best Practices

  • Visibility: Only public members in the interface are visible to COM clients. Members defined in the class, but not the interface, will remain hidden from COM consumers.
  • Multiple Interfaces: A class can implement multiple interfaces. The first interface marked in the class definition is treated as the default interface for COM.
  • Threading Models: Be aware of the threading models used by COM. Ensure that your components are safe in multi-threaded contexts, particularly if accessed across threads.

Integrating AI and Automation

In the rapidly evolving tech landscape, integrating AI and automation into COM components can enhance their functionality. For example, using OpenAI’s models, you could develop intelligent components that provide insights or automate complex workflows. This would not only modernize legacy systems but also increase their value and usability in contemporary applications.

Conclusion

COM components in C# present powerful opportunities for cross-language and cross-process communication. Understanding their structure, implementation, and the critical role they play in interoperability can significantly enhance your software solutions. By exposing .NET classes to COM, you can unlock the potential of legacy systems while positioning your applications for future innovation.

For further implementation examples and insights, feel free to explore my GitHub.

Also, connect with me on LinkedIn, where I share additional resources on software architecture and engineering practices.

Understanding the Mediator Pattern: Simplifying Communication in C# with MediatR

  • Streamlined Communication: Centralizes interactions between components for clarity.
  • Decoupling: Reduces tight coupling, enhancing maintainability.
  • MediatR Integration: A robust tool for implementing the Mediator Pattern in C#.
  • Layered Architecture: Promotes separation of concerns for scalable systems.
  • Reusability: Enables components to be reused across contexts.

What is the Mediator Pattern?

The Mediator Pattern is a behavioral design pattern that promotes loose coupling by centralizing communication between objects. Instead of objects interacting directly, they communicate through a mediator, which encapsulates interaction logic. This reduces dependencies, making systems easier to maintain and extend.

Problems Addressed by the Mediator Pattern

  • Tight Coupling: Direct object interactions create complex, interdependent codebases. The Mediator Pattern eliminates this by routing communication through a single point.
  • Complex Maintenance: Centralized communication simplifies debugging and updating interaction logic.
  • Scalability Issues: Decoupled components are easier to modify or replace, supporting system growth.
  • Reusability: Independent components can be reused in different contexts without modification.

Implementing the Mediator Pattern in C# with MediatR

In .NET, MediatR is a lightweight library that simplifies the Mediator Pattern, often used with Command Query Responsibility Segregation (CQRS). It enables clean separation of concerns by handling requests (commands or queries) through mediators and their handlers.

Steps to Use MediatR

  1. Install MediatR: Add the MediatR and MediatR.Extensions.Microsoft.DependencyInjection packages via NuGet.
  2. Define Requests and Handlers: Create request classes (commands or queries) and their corresponding handlers to process them.
  3. Configure Dependency Injection: Register MediatR services in your application’s dependency injection container.
  4. Dispatch Requests: Use the IMediator interface to send requests from controllers or services.

Sample Code and Explanation

Below is a practical example of using MediatR in a C# application to handle a user registration process.

Sample Code

// 1. Install MediatR packages
// Run in your project: 
// dotnet add package MediatR
// dotnet add package MediatR.Extensions.Microsoft.DependencyInjection

using MediatR;
using Microsoft.Extensions.DependencyInjection;
using System;
using System.Threading;
using System.Threading.Tasks;

// 2. Define a Command (Request)
public class RegisterUserCommand : IRequest<User>
{
    public string Username { get; set; }
    public string Email { get; set; }
}

// 3. Define the Command Handler
public class RegisterUserCommandHandler : IRequestHandler<RegisterUserCommand, User>
{
    public Task<User> Handle(RegisterUserCommand request, CancellationToken cancellationToken)
    {
        // Simulate user registration logic (e.g., save to database)
        var user = new User
        {
            Id = Guid.NewGuid(),
            Username = request.Username,
            Email = request.Email
        };
        Console.WriteLine($"User {user.Username} registered with email {user.Email}");
        return Task.FromResult(user);
    }
}

// 4. User Model
public class User
{
    public Guid Id { get; set; }
    public string Username { get; set; }
    public string Email { get; set; }
}

// 5. Program Setup and Execution
public class Program
{
    public static async Task Main()
    {
        // Configure Dependency Injection
        var services = new ServiceCollection();
        services.AddMediatR(cfg => cfg.RegisterServicesFromAssembly(typeof(Program).Assembly));
        var serviceProvider = services.BuildServiceProvider();

        // Resolve IMediator
        var mediator = serviceProvider.GetService<IMediator>();

        // Create and send a command
        var command = new RegisterUserCommand
        {
            Username = "john_doe",
            Email = "john@example.com"
        };

        var registeredUser = await mediator.Send(command);
        Console.WriteLine($"Registered User ID: {registeredUser.Id}");
    }
}

Explanation

  • Command Definition: RegisterUserCommand represents the action (registering a user) and implements IRequest<User>, indicating it returns a User object.
  • Handler Logic: RegisterUserCommandHandler processes the command, simulating user registration. In a real application, this might involve database operations.
  • Dependency Injection: MediatR is registered in the DI container, allowing IMediator to route requests to the correct handler.
  • Request Dispatch: The IMediator.Send method sends the command to its handler, keeping the calling code decoupled from the handler’s implementation.

Layered Architecture Explained

A layered architecture organizes a .NET application into distinct layers, each with specific responsibilities, enhancing maintainability and scalability.

  • Domain Layer: Holds core business logic, entities, and domain services. Typical contents: entities, value objects, domain events.
  • Application Layer: Orchestrates business use cases and mediates between the domain and external layers. Typical contents: MediatR commands/queries, handlers, application services.
  • Infrastructure Layer: Manages technical concerns like database access and external integrations. Typical contents: repositories, EF Core contexts, API clients.
  • Presentation Layer: Handles client interactions, exposing endpoints or UI. Typical contents: controllers, Razor pages, minimal APIs.

MediatR in Layered Architecture

  • Domain Layer: Contains pure business logic, unaware of MediatR.
  • Application Layer: Hosts MediatR commands, queries, and handlers, orchestrating business logic.
  • Infrastructure Layer: Provides services (e.g., repositories) used by handlers.
  • Presentation Layer: Sends requests via IMediator, typically from controllers.

Request Flow Example

  1. A client sends a REST request to the presentation layer (e.g., a POST to /api/users).
  2. The controller creates a command (e.g., RegisterUserCommand) and dispatches it via IMediator (see the controller sketch after this list).
  3. MediatR routes the command to its handler in the application layer.
  4. The handler collaborates with domain entities and infrastructure services (e.g., a repository).
  5. The result is returned to the controller and sent to the client.
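A minimal controller sketch for step 2 (the route, controller name, and return shape are illustrative):

using System.Threading.Tasks;
using MediatR;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/users")]
public class UsersController : ControllerBase
{
    private readonly IMediator _mediator;

    public UsersController(IMediator mediator) => _mediator = mediator;

    [HttpPost]
    public async Task<ActionResult<User>> Register(RegisterUserCommand command)
    {
        // The controller stays thin: it only dispatches the command and shapes the response.
        var user = await _mediator.Send(command);
        return Ok(user);
    }
}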

Conclusion

The Mediator Pattern, implemented via MediatR in C#, simplifies application architecture by decoupling components and centralizing communication. This leads to cleaner, more maintainable, and scalable codebases, especially when combined with CQRS and layered architecture. By adopting these practices, developers can build robust .NET applications that are easier to extend and test.

FAQ

  • What is the Mediator Pattern? A behavioral pattern that centralizes communication between objects, reducing direct dependencies.
  • How does MediatR improve application architecture? It decouples request handling, supports CQRS, and integrates seamlessly with dependency injection.
  • Can the Mediator Pattern be used in other languages? Yes, it’s language-agnostic and widely used in languages like Java, Python, and JavaScript.
  • What are real-world applications of the Mediator Pattern? It’s used in chat applications, event-driven systems, and microservices to manage complex interactions.

Connect with me on LinkedIn or check out my GitHub for more examples and discussions on software architecture!

Mastering SOLID Principles in C# Development

SOLID Pattern Object Oriented Design and How to Use It in C#

  • Enhances maintainability and scalability of applications.
  • Guides developers in crafting robust software systems.
  • Encourages extensible software architectures.
  • Improves reliability and promotes clean design.
  • Facilitates easier testing and mocking through abstraction.

Understanding SOLID Principles

The SOLID acronym comprises five principles:

  1. Single Responsibility Principle (SRP)
  2. Open/Closed Principle (OCP)
  3. Liskov Substitution Principle (LSP)
  4. Interface Segregation Principle (ISP)
  5. Dependency Inversion Principle (DIP)

While these principles are applicable across various programming languages, they align exceptionally well with C# due to its robust type system and object-oriented capabilities. Let’s delve into each principle in detail.

Single Responsibility Principle (SRP)

Definition: A class should have only one reason to change, meaning it should only have one job or responsibility.

Implementation in C#:

Consider the following implementation where a class violates SRP by performing multiple roles:


// Bad example - multiple responsibilities
public class UserService
{
    public void RegisterUser(string email, string password)
    {
        // Register user logic
        // Send email logic
        // Log activity
    }
}

In contrast, adhering to the Single Responsibility Principle leads to a more maintainable structure:


// Better example - single responsibility
public class UserRegistration
{
    private readonly EmailService _emailService;
    private readonly LoggingService _loggingService;
    
    public UserRegistration(EmailService emailService, LoggingService loggingService)
    {
        _emailService = emailService;
        _loggingService = loggingService;
    }
    
    public void RegisterUser(string email, string password)
    {
        // Only handle user registration
        var user = new User(email, password);
        SaveUserToDatabase(user);
        
        _emailService.SendWelcomeEmail(email);
        _loggingService.LogActivity("User registered: " + email);
    }
}

Benefits of SRP:

  • Improved maintainability as each class has a distinct responsibility.
  • Easier collaboration; team members can work on separate functionalities with minimal overlap.

Open/Closed Principle (OCP)

Definition: Software entities should be open for extension but closed for modification.

Implementation in C#:

Let’s assess a traditional approach that violates the OCP:


// Bad approach
public class AreaCalculator
{
    public double CalculateArea(object shape)
    {
        if (shape is Rectangle rectangle)
            return rectangle.Width * rectangle.Height;
        else if (shape is Circle circle)
            return Math.PI * circle.Radius * circle.Radius;
        
        throw new NotSupportedException("Shape not supported");
    }
}

By implementing the OCP, we can extend functionality without altering existing code:


// Better approach using OCP
public interface IShape
{
    double CalculateArea();
}

public class Rectangle : IShape
{
    public double Width { get; set; }
    public double Height { get; set; }
    
    public double CalculateArea()
    {
        return Width * Height;
    }
}

public class Circle : IShape
{
    public double Radius { get; set; }
    
    public double CalculateArea()
    {
        return Math.PI * Radius * Radius;
    }
}

// Now we can add new shapes without modifying existing code

Benefits of OCP:

  • Encourages the development of extensible software architectures.
  • Reduces the risk of introducing bugs to existing functionalities.

Liskov Substitution Principle (LSP)

Definition: Objects of a superclass should be replaceable with objects of its subclasses without affecting the correctness of the program.

Implementation in C#:

Let’s critique this implementation which violates LSP:


// Violation of LSP
public class Rectangle
{
    public virtual int Width { get; set; }
    public virtual int Height { get; set; }
    
    public virtual int GetArea()
    {
        return Width * Height;
    }
}

public class Square : Rectangle
{
    public override int Width 
    { 
        get { return base.Width; }
        set { 
            base.Width = value;
            base.Height = value; // This breaks LSP
        }
    }
}

To adhere to LSP, we separate shape behavior into correct implementations:


// Better approach adhering to LSP
public interface IShape
{
    int GetArea();
}

public class Rectangle : IShape
{
    public int Width { get; set; }
    public int Height { get; set; }
    
    public int GetArea()
    {
        return Width * Height;
    }
}

public class Square : IShape
{
    public int Side { get; set; }
    
    public int GetArea()
    {
        return Side * Side;
    }
}

Benefits of LSP:

  • Promotes a reliable type hierarchy, ensuring that subclass instances work seamlessly in place of base class instances.

Interface Segregation Principle (ISP)

Definition: Clients should not be forced to depend on interfaces they do not use.

Implementation in C#:

This example showcases a common mistake by violating ISP:


// Violation of ISP
public interface IWorker
{
    void Work();
    void Eat();
    void Sleep();
}

// Better approach with segregated interfaces
public interface IWorkable
{
    void Work();
}

public interface IEatable
{
    void Eat();
}

public interface ISleepable
{
    void Sleep();
}
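With the segregated interfaces in place, each client implements only what it needs; for example:

public class HumanWorker : IWorkable, IEatable, ISleepable
{
    public void Work() { /* ... */ }
    public void Eat() { /* ... */ }
    public void Sleep() { /* ... */ }
}

public class RobotWorker : IWorkable
{
    public void Work() { /* ... */ }
    // No forced, meaningless Eat() or Sleep() implementations.
}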

Benefits of ISP:

  • Reduces side effects and promotes clean design, enhancing modularity.
  • Developers work with specific interfaces relevant to their implementations.

Dependency Inversion Principle (DIP)

Definition: High-level modules should not depend on low-level modules; both should depend on abstractions.

Implementation in C#:

Examine this flawed approach under DIP:


// Violation of DIP
public class NotificationService
{
    private readonly EmailSender _emailSender;
    
    public NotificationService()
    {
        _emailSender = new EmailSender();
    }
    
    public void SendNotification(string message, string recipient)
    {
        _emailSender.SendEmail(message, recipient);
    }
}

Implementing DIP effectively allows for a more flexible design:


// Better approach using DIP
public interface IMessageSender
{
    void SendMessage(string message, string recipient);
}

public class EmailSender : IMessageSender
{
    public void SendMessage(string message, string recipient)
    {
        // Email sending logic
    }
}

public class SMSSender : IMessageSender
{
    public void SendMessage(string message, string recipient)
    {
        // SMS sending logic
    }
}

public class NotificationService
{
    private readonly IMessageSender _messageSender;
    
    public NotificationService(IMessageSender messageSender)
    {
        _messageSender = messageSender;
    }
    
    public void SendNotification(string message, string recipient)
    {
        _messageSender.SendMessage(message, recipient);
    }
}

Benefits of DIP:

  • Enhances the flexibility and reusability of code.
  • Facilitates easier testing and mocking through abstraction.

Conclusion

Incorporating the SOLID principles in C# development results in several benefits, such as improved maintainability, enhanced testability, increased flexibility, better code organization, and reduced technical debt. As applications grow in scale and complexity, consciously applying these principles will contribute to producing robust, maintainable, and adaptable software systems.

By prioritizing SOLID principles in your coding practices, you won’t just write C# code— you’ll create software that stands the test of time.

If you’re interested in exploring further implementation examples, feel free to connect with me on LinkedIn or check out my GitHub. Happy coding!

FAQ

What are the SOLID principles?

The SOLID principles are five design principles that help software developers create more maintainable and flexible systems.

How does SRP improve code quality?

SRP enhances code quality by ensuring that a class has only one reason to change, making it easier to manage and understand.

What advantages does OCP provide?

OCP allows developers to extend functionalities without changing existing code, reducing bugs and improving code safety.

Can LSP help avoid bugs?

Yes, adhering to LSP promotes a reliable class hierarchy and helps to avoid bugs that can arise from unexpected behavior in subclasses.

Why is Dependency Inversion important?

DIP is crucial for reducing coupling and enhancing flexibility, making it easier to change or replace components without affecting high-level modules.

Microsoft Azure Service Bus Exception: “Cannot allocate more handles. The maximum number of handles is 4999”

When working with Microsoft Azure Service Bus, you may encounter the following exception:

“Cannot allocate more handles. The maximum number of handles is 4999.”

This issue typically arises due to improper dependency injection scope configuration for the ServiceBusClient. In most cases, the ServiceBusClient is registered as Scoped instead of Singleton, leading to the creation of multiple instances during the application lifetime, which exhausts the available handles.

In this blog post, we’ll explore the root cause and demonstrate how to fix this issue by using proper dependency injection in .NET applications.

Understanding the Problem

Scoped vs. Singleton

  1. Scoped: A new instance of the service is created per request.
  2. Singleton: A single instance of the service is shared across the entire application lifetime.

The ServiceBusClient is a heavyweight object: it owns the underlying AMQP connection and is intended to be cached and reused for the lifetime of the application. Hence, it should be registered as a Singleton to avoid excessive resource allocation and ensure optimal performance.

Before Fix: Using Scoped Registration

Here’s an example of the problematic configuration:

public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped(serviceProvider =>
    {
        string connectionString = Configuration.GetConnectionString("ServiceBus");
        return new ServiceBusClient(connectionString);
    });

    services.AddScoped<IMessageProcessor, MessageProcessor>();
}

In this configuration:

  • A new instance of ServiceBusClient is created for each HTTP request or scoped context.
  • This quickly leads to resource exhaustion, causing the “Cannot allocate more handles” error.

Solution: Switching to Singleton

To fix this, register the ServiceBusClient as a Singleton, ensuring a single instance is shared across the application lifetime:

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton(serviceProvider =>
    {
        string connectionString = Configuration.GetConnectionString("ServiceBus");
        return new ServiceBusClient(connectionString);
    });

    services.AddScoped<IMessageProcessor, MessageProcessor>();
}

In this configuration:

  • A single instance of ServiceBusClient is created and reused for all requests.
  • Resource usage is optimized, and the exception is avoided.

Sample Code: Before and After

Before Fix (Scoped Registration)

public interface IMessageProcessor
{
    Task ProcessMessageAsync();
}

public class MessageProcessor : IMessageProcessor
{
    private readonly ServiceBusClient _client;

    public MessageProcessor(ServiceBusClient client)
    {
        _client = client;
    }

    public async Task ProcessMessageAsync()
    {
        ServiceBusReceiver receiver = _client.CreateReceiver("queue-name");
        var message = await receiver.ReceiveMessageAsync();
        Console.WriteLine($"Received message: {message.Body}");
        await receiver.CompleteMessageAsync(message);
    }
}

After Fix (Singleton Registration)

public void ConfigureServices(IServiceCollection services)
{
    // Singleton registration for ServiceBusClient
    services.AddSingleton(serviceProvider =>
    {
        string connectionString = Configuration.GetConnectionString("ServiceBus");
        return new ServiceBusClient(connectionString);
    });

    services.AddScoped<IMessageProcessor, MessageProcessor>();
}

public class MessageProcessor : IMessageProcessor
{
    private readonly ServiceBusClient _client;

    public MessageProcessor(ServiceBusClient client)
    {
        _client = client;
    }

    public async Task ProcessMessageAsync()
    {
        ServiceBusReceiver receiver = _client.CreateReceiver("queue-name");
        var message = await receiver.ReceiveMessageAsync();
        Console.WriteLine($"Received message: {message.Body}");
        await receiver.CompleteMessageAsync(message);
    }
}

Key Takeaways

  1. Always use Singleton scope for ServiceBusClient to optimize resource usage (an alternative registration approach is sketched after this list).
  2. Avoid using Scoped or Transient scope for long-lived, resource-heavy objects.
  3. Test your application under load to ensure no resource leakage occurs.
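As an alternative to the manual singleton registration shown above, the Microsoft.Extensions.Azure helper package can do the registration for you; a hedged sketch (assuming the Microsoft.Extensions.Azure and Azure.Messaging.ServiceBus packages are referenced):

using Microsoft.Extensions.Azure;

public void ConfigureServices(IServiceCollection services)
{
    // AddServiceBusClient registers ServiceBusClient as a singleton under the hood.
    services.AddAzureClients(azure =>
    {
        azure.AddServiceBusClient(Configuration.GetConnectionString("ServiceBus"));
    });

    services.AddScoped<IMessageProcessor, MessageProcessor>();
}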

Sending Apple Push Notification for Live Activities Using .NET

In the evolving world of app development, ensuring real-time engagement with users is crucial. Apple Push Notification Service (APNs) enables developers to send notifications to iOS devices, and with the introduction of Live Activities in iOS, keeping users updated about ongoing tasks is easier than ever. This guide demonstrates how to use .NET to send Live Activity push notifications using APNs.

Prerequisites

Before diving into the code, ensure you have the following:

  1. Apple Developer Account with access to APNs.
  2. P8 Certificate downloaded from the Apple Developer Portal.
  3. Your Team ID, Key ID, and Bundle ID of the iOS application.
  4. .NET SDK installed on your system.

Overview of the Code

The provided ApnsService class encapsulates the logic to interact with APNs for sending push notifications, including Live Activities. Let’s break it down step-by-step:

1. Initializing APNs Service

The constructor sets up the base URI for APNs:

  • Use https://api.push.apple.com for production.
  • Use https://api.development.push.apple.com for the development environment.

_httpClient = new HttpClient { BaseAddress = new Uri("https://api.development.push.apple.com:443") };
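For context, a minimal skeleton of the fields the snippets below rely on might look like this (the values are placeholders, and the JWT/PEM handling shown later assumes the jose-jwt and BouncyCastle packages):

using System;
using System.Net.Http;

public class ApnsService
{
    private readonly HttpClient _httpClient;
    private readonly string _teamId = "YOUR_TEAM_ID";
    private readonly string _keyId = "YOUR_KEY_ID";
    private readonly string _bundleId = "com.example.yourapp";
    private readonly string _p8CertificateFileLocation = "AuthKey_XXXXXXXXXX.p8";

    public ApnsService()
    {
        // Switch to https://api.push.apple.com for production builds.
        _httpClient = new HttpClient { BaseAddress = new Uri("https://api.development.push.apple.com:443") };
    }
}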

2. Generating the JWT Token

APNs requires a JWT token for authentication. This token is generated using:

  • Team ID: Unique identifier for your Apple Developer account.
  • Key ID: Associated with the P8 certificate.
  • ES256 Algorithm: Uses the private key in the P8 certificate to sign the token.

private string GetProviderToken()
{
    double epochNow = (int)DateTime.UtcNow.Subtract(new DateTime(1970, 1, 1)).TotalSeconds;
    Dictionary<string, object> payload = new Dictionary<string, object>
    {
        { "iss", _teamId },
        { "iat", epochNow }
    };
    var extraHeaders = new Dictionary<string, object>
    {
        { "kid", _keyId },
        { "alg", "ES256" }
    };

    CngKey privateKey = GetPrivateKey();

    return JWT.Encode(payload, privateKey, JwsAlgorithm.ES256, extraHeaders);
}

3. Loading the Private Key

The private key is extracted from the .p8 file using BouncyCastle.

// Requires the jose-jwt (JWT, EccKey) and BouncyCastle (PemReader, ECPrivateKeyParameters) packages.
private CngKey GetPrivateKey()
{
    using (var reader = File.OpenText(_p8CertificateFileLocation))
    {
        ECPrivateKeyParameters ecPrivateKeyParameters = (ECPrivateKeyParameters)new PemReader(reader).ReadObject();

        // Derive the public point Q = d * G from the private scalar; EccKey.New expects the
        // public key coordinates, not the curve generator's.
        var q = ecPrivateKeyParameters.Parameters.G.Multiply(ecPrivateKeyParameters.D).Normalize();
        var x = q.AffineXCoord.GetEncoded();
        var y = q.AffineYCoord.GetEncoded();
        var d = ecPrivateKeyParameters.D.ToByteArrayUnsigned();

        return EccKey.New(x, y, d);
    }
}

4. Sending the Notification

The SendApnsNotificationAsync method handles:

  • Building the request with headers and payload.
  • Adding apns-push-type as liveactivity for Live Activity notifications.
  • Adding a unique topic for Live Activities by appending .push-type.liveactivity to the Bundle ID.

public async Task SendApnsNotificationAsync<T>(string deviceToken, string pushType, T payload) where T : class
{
    var jwtToken = GetProviderToken();
    var jsonPayload = JsonSerializer.Serialize(payload);

    // Prepare HTTP request
    var request = new HttpRequestMessage(HttpMethod.Post, $"/3/device/{deviceToken}")
    {
        Content = new StringContent(jsonPayload, Encoding.UTF8, "application/json")
    };
    request.Headers.Add("authorization", $"Bearer {jwtToken}");
    request.Headers.Add("apns-push-type", pushType);
    if (pushType == "liveactivity")
    {
        request.Headers.Add("apns-topic", _bundleId + ".push-type.liveactivity");
        request.Headers.Add("apns-priority", "10");
    }
    else
    {
        request.Headers.Add("apns-topic", _bundleId);
    }
    request.Version = new Version(2, 0);

    // Send the request
    var response = await _httpClient.SendAsync(request);
    if (response.IsSuccessStatusCode)
    {
        Console.WriteLine("Push notification sent successfully!");
    }
    else
    {
        var responseBody = await response.Content.ReadAsStringAsync();
        Console.WriteLine($"Failed to send push notification: {response.StatusCode} - {responseBody}");
    }
}

Sample Usage

Here’s how you can use the ApnsService class to send a Live Activity notification:

var apnsService = new ApnsService();

// Example device token (replace with a real one)
var pushDeviceToken = "808f63xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";

// Create the payload for the Live Activity
var notificationPayload = new PushNotification
{
    Aps = new Aps
    {
        Timestamp = DateTimeOffset.UtcNow.ToUnixTimeSeconds(),
        Event = "update",
        ContentState = new ContentState
        {
            Status = "Charging",
            ChargeAmount = "65 Kw",
            DollarAmount = "$11.80",
            timeDuration = "00:28",
            Percentage = 80
        },
    }
};

await apnsService.SendApnsNotificationAsync(pushDeviceToken, "liveactivity", notificationPayload);
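The payload classes used above are not shown in the original snippet; a hedged sketch follows (the APNs keys timestamp, event and content-state are fixed, while the ContentState members must mirror the ContentState of your iOS Live Activity, so the names here are assumptions taken from the sample):

using System.Text.Json.Serialization;

public class PushNotification
{
    [JsonPropertyName("aps")]
    public Aps Aps { get; set; }
}

public class Aps
{
    [JsonPropertyName("timestamp")]
    public long Timestamp { get; set; }

    [JsonPropertyName("event")]
    public string Event { get; set; }

    [JsonPropertyName("content-state")]
    public ContentState ContentState { get; set; }
}

public class ContentState
{
    [JsonPropertyName("status")]
    public string Status { get; set; }

    [JsonPropertyName("chargeAmount")]
    public string ChargeAmount { get; set; }

    [JsonPropertyName("dollarAmount")]
    public string DollarAmount { get; set; }

    [JsonPropertyName("timeDuration")]
    public string timeDuration { get; set; }

    [JsonPropertyName("percentage")]
    public int Percentage { get; set; }
}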

Key Points to Remember

  1. JWT Token Validity: Tokens expire after 1 hour. Ensure you regenerate tokens periodically.
  2. APNs Endpoint: Use the correct environment (production or development) based on your app stage.
  3. Error Handling: Handle HTTP responses carefully. Common issues include invalid tokens or expired certificates.

Debugging Tips

  • Ensure your device token is correct and valid.
  • Double-check your .p8 file, Team ID, Key ID, and Bundle ID.
  • Use tools like Postman to test your APNs requests independently.

Conclusion

Sending Live Activity push notifications using .NET involves integrating APNs with proper authentication and payload setup. The ApnsService class demonstrated here provides a robust starting point for developers looking to enhance user engagement with real-time updates.🚀

Mastering Feature Flag Management with Azure Feature Manager

In the dynamic realm of software development, the power to adapt and refine your application’s features in real-time is a game-changer. Azure Feature Manager emerges as a potent tool in this scenario, empowering developers to effortlessly toggle features on or off directly from the cloud. This comprehensive guide delves into how Azure Feature Manager can revolutionize your feature flag control, enabling seamless feature introduction, rollback capabilities, A/B testing, and tailored user experiences.

Introduction to Azure Feature Manager

Azure Feature Manager is a sophisticated component of Azure App Configuration. It offers a unified platform for managing feature flags across various environments and applications. Its capabilities extend to gradual feature rollouts, audience targeting, and seamless integration with Azure Active Directory for enhanced access control.

Step-by-Step Guide to Azure App Configuration Setup

Initiating your journey with Azure Feature Manager begins with setting up an Azure App Configuration store. Follow these steps for a smooth setup:

  1. Create Your Azure App Configuration: Navigate to the Azure portal and initiate a new Azure App Configuration resource. Fill in the required details and proceed with creation.
  2. Secure Your Access Keys: Post-creation, access the “Access keys” section under your resource settings to retrieve the connection strings, crucial for your application’s connection to the Azure App Configuration.

Crafting Feature Flags

To leverage feature flags in your application:

  1. Within the Azure App Configuration resource, click on “Feature Manager” and then “+ Add” to introduce a new feature flag.
  2. Identify Your Feature Flag: Name it thoughtfully, as this identifier will be used within your application to check the flag’s status.

Application Integration Essentials

Installing Required NuGet Packages

Your application necessitates specific packages for Azure integration:

  • Microsoft.Extensions.Configuration.AzureAppConfiguration
  • Microsoft.FeatureManagement.AspNetCore

These can be added via your IDE or through the command line in your project directory:

dotnet add package Microsoft.Extensions.Configuration.AzureAppConfiguration
dotnet add package Microsoft.FeatureManagement.AspNetCore

Application Configuration

Modify your appsettings.json to include your Azure App Configuration connection string:

{
  "ConnectionStrings": {
    "AppConfig": "Endpoint=https://<your-resource-name>.azconfig.io;Id=<id>;Secret=<secret>"
  }
}

Further, in Program.cs (or Startup.cs for earlier .NET versions), ensure your application is configured to utilize Azure App Configuration and activate feature management:

var builder = WebApplication.CreateBuilder(args);

builder.Configuration.AddAzureAppConfiguration(options =>
{
    options.Connect(builder.Configuration["ConnectionStrings:AppConfig"])
           .UseFeatureFlags();
});

builder.Services.AddFeatureManagement();

Implementing Feature Flags

To verify a feature flag’s status within your code:

using Microsoft.FeatureManagement;

public class FeatureService
{
    private readonly IFeatureManager _featureManager;

    public FeatureService(IFeatureManager featureManager)
    {
        _featureManager = featureManager;
    }

    public async Task<bool> IsFeatureActive(string featureName)
    {
        return await _featureManager.IsEnabledAsync(featureName);
    }
}
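Besides checking flags imperatively, ASP.NET Core endpoints can be gated declaratively with the FeatureGate attribute from Microsoft.FeatureManagement.AspNetCore (the controller and flag name below are illustrative):

using Microsoft.AspNetCore.Mvc;
using Microsoft.FeatureManagement.Mvc;

[ApiController]
[Route("api/[controller]")]
public class ReportsController : ControllerBase
{
    // By default the endpoint returns 404 when the flag is off.
    [HttpGet]
    [FeatureGate("NewReportingDashboard")]
    public IActionResult Get() => Ok("New dashboard enabled");
}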

Advanced Implementation: Custom Targeting Filter

Go to Azure and modify your feature flag.

Make sure the “Default Percentage” is set to 0. In this scenario we want to target a specific user based on their email address.

For user- or group-specific targeting, we need to implement ITargetingContextAccessor. In the example below we target users by email address, where the email address comes from the JWT.

using Microsoft.FeatureManagement.FeatureFilters;
using System.Security.Claims;

namespace SampleApp
{
    public class B2CTargetingContextAccessor : ITargetingContextAccessor
    {
        private const string TargetingContextLookup = "B2CTargetingContextAccessor.TargetingContext";
        private readonly IHttpContextAccessor _httpContextAccessor;

        public B2CTargetingContextAccessor(IHttpContextAccessor httpContextAccessor)
        {
            _httpContextAccessor = httpContextAccessor;
        }

        public ValueTask<TargetingContext> GetContextAsync()
        {
            HttpContext httpContext = _httpContextAccessor.HttpContext;

            //
            // Try cache lookup
            if (httpContext.Items.TryGetValue(TargetingContextLookup, out object value))
            {
                return new ValueTask<TargetingContext>((TargetingContext)value);
            }

            ClaimsPrincipal user = httpContext.User;

            //
            // Build targeting context based off user info
            TargetingContext targetingContext = new TargetingContext
            {
                UserId = user.FindFirst("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress")?.Value,
                Groups = new string[] { }
            };

            //
            // Cache for subsequent lookup
            httpContext.Items[TargetingContextLookup] = targetingContext;

            return new ValueTask<TargetingContext>(targetingContext);
        }
    }
}

In Program.cs (or Startup.cs for earlier .NET versions), modify your feature management registration to use the targeting filter:

builder.Services.AddFeatureManagement().WithTargeting<B2CTargetingContextAccessor>();

You also need to pass the targeting context to the feature manager

using Microsoft.FeatureManagement;
using Microsoft.FeatureManagement.FeatureFilters;

public class FeatureService
{
    private readonly IFeatureManager _featureManager;
    private readonly ITargetingContextAccessor _targetContextAccessor;

    public FeatureService(IFeatureManager featureManager, ITargetingContextAccessor targetingContextAccessor)
    {
        _featureManager = featureManager;
        _targetContextAccessor = targetingContextAccessor;
    }

    public async Task<bool> IsFeatureActive()
    {
        return await _featureManager.IsEnabledAsync("UseLocationWebhook", _targetContextAccessor);
    }
}

Logging in .NET – Elastic Search, Kibana and Serilog

I’ve used log4net in the past and found it quite useful, as it is ready to use out of the box. In my last workplace we used Splunk, and it is amazing: I was able to troubleshoot production issues by looking at trends and activities, run queries, filter the logs, and build pretty dashboards. The downside is the cost: Splunk is expensive (I don’t think it is aimed at mainstream users or small businesses).

So I’ve found another logging engine/storage/tool which is amazing! It is called Elasticsearch, and it is open source (there are different subscription levels for better support). In essence, Elasticsearch is an engine for search and analytics.

How about the GUI/dashboard? You can use Kibana, an open source data visualization platform that lets you interact with your data.

OK, so let’s say I have a .NET application: how do I write my logs to Elasticsearch? You can use Serilog. It lets you log structured event data, and the Serilog Elasticsearch sink integrates it with Elasticsearch.

Serilog has different sink providers that allow you to store your logs externally (besides files), including Splunk.
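As a rough sketch of what that wiring can look like (assuming the Serilog, Serilog.Sinks.Console and Serilog.Sinks.Elasticsearch packages, and an Elasticsearch node on http://localhost:9200):

using System;
using Serilog;
using Serilog.Sinks.Elasticsearch;

Log.Logger = new LoggerConfiguration()
    .Enrich.FromLogContext()
    .WriteTo.Console()
    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
    {
        AutoRegisterTemplate = true,
        IndexFormat = "myapp-logs-{0:yyyy.MM.dd}"
    })
    .CreateLogger();

// Structured properties (OrderId, Elapsed) become searchable fields in Kibana.
Log.Information("Order {OrderId} processed in {Elapsed} ms", 42, 123);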

I will talk more about Serilog separately in a different post, stay tuned!

Automapper – Dynamic and Generic Mapping

In AutoMapper we normally have 1:1 mappings defined, but I had a case where the incoming stream is a JSON payload which I cast to a dynamic (using JObject.Parse), and one of the properties within the payload determines which object it needs to be mapped to. Let’s take a look at the sample below.

Input
Json payload to create a city

{
  "requestId": "C4910016-C30D-415C-89D3-D08D724429A6",
  "messageType": "CITY_CREATED",
  "categoryName": "categoryA",
  "metadata": {
    "city": "sydney",
    "state": "NSW",
    "postcode": "2000",
    "country": "australia"
  }
}

At the same time, we can also have a JSON payload to create a staff member:

{
  "requestId": "C4910016-C30D-415C-89D3-D08D724429A6",
  "messageType": "STAFF_CREATED",
  "categoryName": "categoryB",
  "staffDetail": {
    "name": "fransiscus",
    "dateOfBirth": "01/01/1950"
  },
  "location": {
    "cityId": "1"
  }
}

So what we are doing here: every message goes into a payload property (which can contain any object), and we add some extra information/header/metadata at the parent level.

Desired Outputs

{
  "messageType": "CITY_CREATED",
  "payload": {
    "city": "sydney",
    "state": "NSW",
    "postcode": "2000",
    "country": "australia"
  },
  "provider": "abc",
  "providerRequestId": "C4910016-C30D-415C-89D3-D08D724429A6",
  "receivedAt": "2015-09-30T23:53:58.6118521Z",
  "lastUpdated": "2015-09-30T23:53:58.6128283Z",
  "lastUpdater": "Transformer",
  "attempt": 0
}

{
  "messageType": "STAFF_CREATED",
  "payload": {
    "staffName": "fransiscus",
    "dateOfBirth": "01/01/1950",
    "cityId": "1"
  },
  "provider": "abc",
  "providerRequestId": "C4910016-C30D-415C-89D3-D08D724429A6",
  "receivedAt": "2015-09-30T23:53:58.6118521Z",
  "lastUpdated": "2015-09-30T23:53:58.6128283Z",
  "lastUpdater": "Transformer",
  "attempt": 0
}

Mapping this to a concrete class with a 1:1 mapping is straightforward. The problem here is that “messageType” is what decides which object the payload should be mapped to.

Automapper Configuration:

1. POCO object

An abstract class that stores all the metadata:

public abstract class Metadata
{
    public string MessageType { get; set; }

    public string Provider { get; set; }

    public string ProviderRequestId { get; set; }

    public DateTime ReceivedAt { get; set; }

    public DateTime LastUpdated { get; set; }

    public string LastUpdater { get; set; }

    public int Attempt { get; set; }

    public List<string> Errors { get; set; }
}

public class City
{
    public string CityName { get; set; }
    public string State { get; set; }
    public string PostCode { get; set; }
    public string Country { get; set; }
}

public class StaffDetail
{
    public string Name { get; set; }
    public string DateOfBirth { get; set; }
    public int CityId { get; set; }
}

public class Message<T> : Metadata where T : class
{
    public T Payload { get; set; }
}

2. Let’s create a TypeConverter for the base class, Metadata; this converter will return the appropriate derived Message<T>.

public class MetadataTypeConverter : TypeConverter<dynamic, Metadata>
{
    protected override Metadata ConvertCore(dynamic source)
    {
        Metadata metadata;

        var type = (string)source.messageType.Value;

        switch (type)
        {
            case "STAFF_CREATED":
                metadata = new Message<StaffDetail> { Payload = Mapper.Map<dynamic, StaffDetail>(source) };
                break;
            case "CITY_CREATED":
                metadata = new Message<City> { Payload = Mapper.Map<dynamic, City>(source) };
                break;
            default:
                throw new Exception(string.Format("no mapping defined for {0}", source.messageType.Value));
        }

        metadata.ProviderRequestId = source.requestId;
        metadata.Provider = "My Provider";
        metadata.MessageType = source.messageType;
        metadata.ReceivedAt = DateTime.UtcNow;
        metadata.LastUpdated = DateTime.UtcNow;
        metadata.LastUpdater = "Transformer";
        metadata.Attempt = 0;

        return metadata;
    }
}

3. Let’s create TypeConverters for the derived payload classes, StaffDetail and City.

public class CityTypeConverter : TypeConverter<dynamic, City>
{
    protected override City ConvertCore(dynamic source)
    {
        City city = new City();
        city.CityName = source.metadata.city;
        city.State = source.metadata.state;
        city.PostCode = source.metadata.postcode;
        city.Country = source.metadata.country;

        return city;
    }
}

public class StaffDetailTypeConverter : TypeConverter<dynamic, StaffDetail>
{
    protected override StaffDetail ConvertCore(dynamic source)
    {
        StaffDetail staffdetail = new StaffDetail();
        staffdetail.Name = source.staffDetail.name;
        staffdetail.DateOfBirth = source.staffDetail.dateOfBirth;
        staffdetail.CityId = source.location.cityId;

        return staffdetail;
    }
}

4. Define the AutoMapper mappings in the configuration

public class WhafflMessageMapping : Profile
{
    public override string ProfileName
    {
        get
        {
            return this.GetType().Name;
        }
    }

    protected override void Configure()
    {
        this.CreateMap<dynamic, Metadata>()
            .ConvertUsing(new MetadataTypeConverter());

        this.CreateMap<dynamic, StaffDetail>()
            .ConvertUsing(new StaffDetailTypeConverter());

        this.CreateMap<dynamic, City>()
            .ConvertUsing(new CityTypeConverter());
    }

    private Metadata BuildWhafflMessage(dynamic context)
    {
        var type = (string)context.messageType.Value;

        switch (type)
        {
            case "STAFF_CREATED":
                return new Message<StaffDetail> { Payload = Mapper.Map<dynamic, StaffDetail>(context) };
            case "CITY_CREATED":
                return new Message<City> { Payload = Mapper.Map<dynamic, City>(context) };
            default:
                throw new Exception(string.Format("no mapping defined for {0}", context.messageType.Value));
        }
    }
}
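Tying it together, a minimal usage sketch (assuming the classic AutoMapper static API and Json.NET; the cityJson variable is a placeholder for one of the payloads shown earlier):

using AutoMapper;
using Newtonsoft.Json.Linq;

// Register the profile once at startup.
Mapper.Initialize(cfg => cfg.AddProfile<WhafflMessageMapping>());

// Parse the incoming JSON into a dynamic JObject and let the converter pick the payload type.
dynamic incoming = JObject.Parse(cityJson); // cityJson: the raw JSON string received
Metadata message = Mapper.Map<dynamic, Metadata>(incoming);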

Fusion Log – Assembly Logging

Another helpful debugging tool is Fusion Log (the assembly binding log viewer). It is already installed on your machine by default, and what it does is log where each assembly is loaded from, whether local, the GAC, or some other location; it also tells you when an assembly could not be located.

-First, create a folder called “FusionLog” on the C drive (or any location, with any name)

-Open Regedit and add the keys below

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion

Add:

DWORD ForceLog set value to 1

DWORD LogFailures set value to 1

DWORD LogResourceBinds set value to 1

String LogPath set value to folder for logs (e.g. C:\FusionLog\)

Make sure you include the backslash after the folder name and that the Folder exists.
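If you prefer to script these registry settings instead of editing them by hand, a small C# sketch (run elevated; it writes the same values listed above):

using Microsoft.Win32;

using (RegistryKey fusion = Registry.LocalMachine.CreateSubKey(@"SOFTWARE\Microsoft\Fusion"))
{
    fusion.SetValue("ForceLog", 1, RegistryValueKind.DWord);
    fusion.SetValue("LogFailures", 1, RegistryValueKind.DWord);
    fusion.SetValue("LogResourceBinds", 1, RegistryValueKind.DWord);
    fusion.SetValue("LogPath", @"C:\FusionLog\", RegistryValueKind.String);
}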

-Restart your computer

-Run your application

-Look for the assembly name under C:\FusionLog

-Open the file and it will tell you where the assembly is loaded from
