Technical Insights: Azure, .NET, Dynamics 365 & EV Charging Architecture

Author: fransiscuss

Explore Key Features of C# 14 for Developers

Comprehensive Analysis of C# 14: Key Features and Enhancements

  • C# 14 introduces significant features for enhancing developer productivity and performance.
  • Key enhancements include implicit span conversions, extended `nameof` capabilities, and lambda expression improvements.
  • New features like the contextual `field` keyword and partial constructors promote modular design and cleaner code.
  • User-defined compound assignment operators and dictionary expressions improve performance and simplify code.
  • C# 14 focuses on memory safety, streamlined syntax, and community-driven enhancements.

Enhanced Span Support for Memory Optimization

One of the standout features of C# 14 is first-class support for System.Span<T> and System.ReadOnlySpan<T>, reflecting a broader emphasis on memory safety and performance in high-efficiency scenarios such as real-time data processing and resource-constrained environments. The introduction of implicit conversions between arrays and spans significantly simplifies memory-intensive operations, allowing developers to manage memory effectively without the overhead associated with manual marshaling.

For instance, when converting a string array to a ReadOnlySpan<string>, C# 14 allows a seamless assignment:

string[] words = { "Hello", "World" };
ReadOnlySpan<string> span = words; // Implicit conversion

This change leverages runtime optimizations to minimize heap allocations, thereby making spans ideal for performance-critical applications, such as game development or Internet of Things (IoT) scenarios where every byte of memory counts. Furthermore, as the compiler has been enhanced to recognize span relationships natively, developers can now utilize spans as extension receivers and benefit from improved generic type inference, streamlining their development experience.
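
As a minimal sketch of the extension-receiver improvement (the SpanExtensions class and CountPositive method are illustrative names, not from the original text, and assume a C# 14 compiler):

```csharp
using System;

static class SpanExtensions
{
    // An extension method whose receiver is a read-only span.
    // With C# 14's implicit conversions, a plain array can be
    // passed wherever the span receiver is expected.
    public static int CountPositive(this ReadOnlySpan<int> values)
    {
        int count = 0;
        foreach (int v in values)
            if (v > 0) count++;
        return count;
    }
}

class Demo
{
    static void Main()
    {
        int[] data = { -1, 2, 3 };
        // The array is implicitly converted to ReadOnlySpan<int>.
        Console.WriteLine(data.CountPositive()); // prints 2
    }
}
```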

Extended `nameof` Capabilities for Reflection

In an increasingly complex programming landscape, referring to type names dynamically becomes essential. C# 14 enhances the capabilities of the nameof operator by enabling it to resolve unbound generic type names. Previously, nameof(List<>) was a compile-time error, forcing developers to fall back on reflection-based alternatives such as typeof(List<>).Name, which yields the arity-decorated "List`1". With the new feature, invoking nameof(List<>) cleanly returns "List".

This enhancement is particularly beneficial in the context of frameworks that rely heavily on reflection, such as serialization libraries and dependency injection containers. For example, when building error messages for generic repositories, the use of nameof can greatly improve maintainability and readability:

throw new InvalidOperationException($"{nameof(IRepository<>)} requires implementation.");

By reducing the clutter in logs caused by arity notation, developers can focus on more meaningful output, significantly improving debugging efforts and enhancing the overall developer experience. Feedback from the C# community has been instrumental in shaping this capability, as developers sought to minimize string literals in reflection-heavy code bases.

Lambda Expressions with Parameter Modifiers

C# 14 brings scalability and clarity to lambda expressions by allowing them to incorporate parameter modifiers such as ref, in, out, scoped, and ref readonly, all without needing to specify parameter types explicitly. Prior to this enhancement, developers often faced cumbersome syntax when defining output parameters, which detracted from code readability and conciseness.

The following example illustrates how this feature simplifies lambda expressions:

delegate bool TryParse<T>(string text, out T result);
TryParse<int> parse = (string s, out int result) => int.TryParse(s, out result);

This can now be rewritten more cleanly in C# 14 as:

TryParse<int> parse = (s, out result) => int.TryParse(s, out result);

The absence of explicit type annotations improves the fluency of the code, making it easier to read and write and aligning with existing lambda functionality. However, it is essential to note that modifiers like params still require explicit typing due to compiler constraints. This enhancement particularly benefits low-level interoperability scenarios where output parameters are frequently utilized, as it reduces boilerplate code and fosters a more fluid coding experience.

Field Keyword in Auto-Implemented Properties

C# 14 introduces the contextual field keyword, which greatly streamlines the handling of auto-implemented properties by granting direct access to compiler-generated backing fields. This improvement is particularly notable in scenarios requiring null validation or other property logic, which traditionally necessitated verbose manual backing field management.

Consider this example from prior versions:

private string _message;

public string Message
{
    get => _message;
    set => _message = value ?? throw new ArgumentNullException(nameof(value));
}

With C# 14, developers can eliminate redundancy by utilizing the new field keyword:

public string Message
{
    get;
    set => field = value ?? throw new ArgumentNullException(nameof(value));
}

Here, field acts as a placeholder for the implicit backing field, enhancing readability and maintainability while preserving encapsulation principles. However, users must remain mindful of potential symbol collisions, as using field as an identifier within class members requires disambiguation (e.g., @field or this.field).

This change not only aids in reducing boilerplate but also encourages more concise property implementations, ultimately resulting in cleaner, more maintainable code across projects.

Partial Events and Constructors for Modular Design

With the expanding complexity of software architectures, C# 14 introduces partial events and constructors, which enhance code modularity and facilitate a more organized approach to large codebases. By allowing event and constructor definitions to be split across multiple files, developers can structure their code more flexibly and responsively.

For instance, when defining a logger class, developers can now separate the event declaration and implementation:

// File1.cs
public partial class Logger
{
    public partial event Action<string>? LogEvent;
}

// File2.cs
public partial class Logger
{
    public partial event Action<string>? LogEvent
    {
        add => Subscribe(value);
        remove => Unsubscribe(value);
    }
}

This flexibility extends to partial constructors as well, enabling developers to distribute initializer logic across different files. While only one declaration can employ primary constructor syntax (e.g., Logger(string source)), this capability fosters enhanced collaboration and better organization within teams working on large-scale applications or utilizing code generation tools.

The implications of this feature are significant for source generation and modern architecture patterns, where the separation of concerns and maintainability are paramount. By allowing tool-generated code to inject validation and initialization logic into user-defined constructors, this enhancement streamlines workflows and supports the continuous evolution of application architectures.
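
A partial constructor follows the same declare/implement split as the partial event above. A minimal sketch, assuming C# 14's partial-member rules (the Source property and the validation logic are illustrative assumptions):

```csharp
// File1.cs - declaring part (e.g. emitted by a source generator)
public partial class Logger
{
    public string Source { get; }
    public partial Logger(string source);
}

// File2.cs - implementing part with the actual initialization logic
public partial class Logger
{
    public partial Logger(string source)
    {
        Source = source ?? throw new ArgumentNullException(nameof(source));
    }
}
```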

Extension Members for Augmenting Types

Shipping as a preview feature, extension members change the way developers augment existing types, allowing properties and static members to be added alongside traditional extension methods. This capability leads to a more intuitive and discoverable syntax, particularly beneficial for extending closed-source or interface-based types without necessitating complex inheritance setups.

For example, adding an IsEmpty property to the IEnumerable<T> interface can now be accomplished straightforwardly:

public static class EnumerableExtensions
{
    extension<T>(IEnumerable<T> source)
    {
        public bool IsEmpty => !source.Any();
    }
}

This syntax not only enhances clarity but also promotes code reuse and modularity:

if (strings.IsEmpty) return;

In addition, static extension members bolster usability and flexibility when dealing with types that developers cannot directly modify. The implications for team projects and libraries are substantial, as this feature allows for richer connectivity across application codebases while preserving the integrity of existing types.

The extension member functionality is part of an ongoing effort to make C# more expressive and adaptable, fulfilling developers’ needs for extended functionality while maintaining core principles of object-oriented programming. As this feature matures, developers can look forward to an enriched language experience that aligns more closely with modern programming paradigms.

Null-Conditional Assignment for Safe Mutation

Null safety continues to be a core concern in modern development, and C# 14 introduces a compelling enhancement to null-conditional operators: they can now be utilized on assignment targets. This evolution allows for more concise syntax and safer code execution, as the language can now intelligently bypass assignments for null objects without requiring explicit null checks.

For example, prior to C# 14, developers would write:

if (customer != null) customer.Order = GetOrder();

With the introduction of null-conditional assignment, this logic simplifies to:

customer?.Order = GetOrder();

In this case, if customer is null, the assignment of Order is gracefully skipped, significantly reducing overhead for conditional checks. This also applies to indexed assignments, as shown in the following example:

dict?["key"] = value; // Assigns only if dict is non-null

While these enhancements integrate seamlessly into existing null-coalescing patterns, it is crucial to note the limitations; for instance, null-conditional assignment cannot be combined with the increment and decrement operators (e.g., customer?.LoginCount++ is disallowed). Nonetheless, this feature represents a significant step forward in safeguarding against null reference exceptions, enhancing the overall reliability of C# applications.

User-Defined Compound Assignment Operators

One of the most innovative features in C# 14 is the ability for developers to overload compound assignment operators such as += and -=. This grants developers the ability to optimize performance during mutation operations by directly altering existing objects rather than creating new instances, which is especially beneficial in high-efficiency contexts like mathematical computations.

For instance, a matrix class could utilize user-defined compound operators as follows:

public class Matrix
{
    public double[,] Values; // Matrix values

    public int Rows => Values.GetLength(0);
    public int Cols => Values.GetLength(1);

    // In-place addition: mutates this instance instead of allocating a new Matrix
    public void operator +=(Matrix other)
    {
        for (int i = 0; i < Rows; i++)
            for (int j = 0; j < Cols; j++)
                Values[i, j] += other.Values[i, j];
    }
}

This syntax supports in-place mutations, avoiding the need for redundant memory allocations, which can be critical when dealing with large data structures. Notably, the operator must adhere to specific constraints, returning void and omitting static modifiers, due to its in-place nature, enforcing consistency with language rules to prevent unexpected behavior.

Through the strategic use of user-defined compound assignment operators, developers can achieve significant performance gains by eliminating temporary allocations, with some benchmarks reportedly showing up to 40% fewer allocations in computation-intensive workloads. This capability empowers high-performance applications to operate smoothly under heavy load, enhancing the robustness of numerical algorithms and data processing workflows.

Dictionary Expressions and Collection Enhancements

While still in development, C# 14 introduces the concept of dictionary expressions, poised to revolutionize how developers initialize dictionaries. This feature aims to provide an intuitive syntax akin to other collection initializers, allowing for cleaner and more concise code:

Dictionary<string, int> ages = ["Alice": 30, "Bob": 35]; // proposed syntax, subject to change

This syntax reduces typing overhead and enhances readability compared to traditional dictionary initialization methods. Additionally, simultaneous enhancements to collection expressions allow for optimized initialization of collections, enabling more efficient operations during startup phases.

For example, using collection expressions like [1, 2, ..existing] can lead to improved startup performance due to internal optimizations that minimize individual Add calls. These enhancements collectively serve to streamline the coding experience, enabling developers to focus on core logic rather than boilerplate initialization code and improving the overall performance of applications.
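
As a small illustration of the spread element (available since C# 12's collection expressions, which C# 14 builds on):

```csharp
int[] existing = { 3, 4, 5 };

// The spread element copies 'existing' into the new collection,
// letting the compiler size and fill the array in a single pass
// instead of issuing individual Add calls.
int[] combined = [1, 2, ..existing]; // [1, 2, 3, 4, 5]
```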

Compiler Breaking Changes and Adoption Guidance

With any significant language update, developers must navigate breaking changes to ensure smooth transitions to new features. C# 14 introduces specific alterations that warrant careful attention. One notable change is the treatment of the scoped modifier in lambda expressions, which has transitioned into a reserved keyword. This shift necessitates the use of the @ sign for identifiers previously named scoped:

var v = (scoped s) => { ... }; // Error: 'scoped' is reserved

In this case, developers should use @scoped if they need to reference that identifier.

Moreover, the new implicit span conversions may introduce ambiguities in overload resolution, especially in scenarios involving method overloading between Span<T> and standard arrays. To mitigate this risk, developers should employ explicit casting to .AsSpan() or utilize the OverloadResolutionPriorityAttribute to guide the compiler on intended overload selections.
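
A sketch of such an ambiguity and its resolution (the Process overloads are hypothetical examples, not from the original text):

```csharp
using System;

class OverloadDemo
{
    static void Process(Span<int> data) => Console.WriteLine("span overload");
    static void Process(int[] data) => Console.WriteLine("array overload");

    static void Main()
    {
        int[] numbers = { 1, 2, 3 };
        // Under C# 14's implicit span conversions both overloads are
        // applicable for a bare array argument; calling AsSpan() states
        // the intended overload explicitly.
        Process(numbers.AsSpan());
    }
}
```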

To ensure a successful transition, developers are advised to conduct thorough testing with the .NET 10 SDK and address any warnings or breaking changes using #pragma directives or by carefully managing type disambiguations. This proactive approach will facilitate embracing C# 14’s enhancements while maintaining robust codebases.

Conclusion: Strategic Impact and Future Directions

In summary, C# 14 embodies a substantial leap forward in refining the C# language, equipping developers with enhanced language ergonomics and performance-oriented features. The focus on implicit spans, improved null handling, and the introduction of the contextual field keyword significantly aligns with modern development paradigms that prioritize memory safety and streamlined syntax.

Developers should consider incorporating recommendations such as adopting the field keyword to reduce boilerplate in property handling, leveraging partial events and constructors in extensive codebases, and conducting audits on compound operators within numerical computation libraries to uncover allocation hotspots.

Looking ahead, as the ecosystem surrounding C# evolves, further iterations may finalize features like dictionary expressions and expand support for both static and instance extension members. As teams stabilize their tooling around .NET 10, placing a priority on these enhancements will empower their applications to excel within a rapidly advancing technological landscape. Emphasizing a balance between preview features and production stability will be crucial as organizations seek to capitalize on the opportunities presented by C# 14 and beyond.

FAQs

What are the main enhancements in C# 14?
C# 14 introduces significant improvements like implicit span conversions, extended capabilities for the nameof operator, and enhancements to lambda expressions, among others, aimed at improving developer productivity and code quality.

How does C# 14 improve memory management?
With the first-class support for System.Span<T> and the introduction of null-conditional assignment, C# 14 optimizes memory handling by reducing heap allocations and simplifying null checks.

What should developers be cautious about with breaking changes?
Developers need to navigate changes such as the reserved status of the scoped modifier and potential ambiguities with implicit span conversions to ensure smooth transitions to C# 14.

Understanding COM Components in C# for Interoperability

COM Components in C#: Enabling Interoperability in .NET Applications

  • Understanding the Component Object Model (COM) is essential for seamless technology integration.
  • C# provides robust interop features to expose and consume COM components effortlessly.
  • Proper resource management in COM is crucial for ensuring efficient memory usage.
  • Integrating AI and automation can significantly enhance COM component functionality.
  • Managing visibility and threading models is key to successful COM implementations.

What is COM?

The Component Object Model (COM), developed by Microsoft, is a binary software standard that facilitates inter-process communication and enables dynamic object creation across different programming languages. Its core goal is to provide a flexible and reusable approach to component development by defining a standard interaction mechanism:

  • Language-Agnostic: COM is not tied to any programming language, allowing components to be created and consumed in various languages, thus promoting wider interoperability.
  • Object-Oriented: COM components are organized around object-oriented principles, allowing for encapsulation, inheritance, and polymorphism.

COM is vital for various Microsoft technologies such as Object Linking and Embedding (OLE), ActiveX, and COM+.

Key Concepts of COM

1. COM Interfaces and Objects

In COM, interfaces form the backbone of interaction between clients and components. Each interface comprises a collection of abstract operations that promote loose coupling. The base interface, IUnknown, supports fundamental methods, including reference counting and interface querying via the QueryInterface mechanism. Each COM interface is uniquely identified by a UUID (Universally Unique Identifier), ensuring that clients interact with the correct versions of COM objects.

2. COM in C#: Interoperability

C# provides robust support for consuming and exposing COM components through interop features. This allows for seamless interaction between native COM components and managed .NET code.

Exposing a C# class to COM requires several steps:

  1. Declare Public Interface: Define a public interface that lists the methods and properties that will be accessible to COM clients.
  2. Use COM Attributes: Apply attributes like [ComVisible(true)] and [Guid("...")] to mark classes and interfaces for exposure to COM.
  3. Register Assembly: Set your assembly to “Register for COM Interop” in project properties, allowing it to register with the Windows registry.

Example Implementation

Let’s explore a simple example of how to create a COM component in C#. Here, we will create a basic calculator that can be accessed via COM.

Step 1: Define the Interface

using System.Runtime.InteropServices;

namespace CalculatorCOM
{
    [Guid("4B3F8E2A-9C1D-4F6B-8A7E-2D5C9B1E3F4A")] // placeholder; generate your own GUID
    [ComVisible(true)]
    public interface ICalculator
    {
        double Add(double a, double b);
        double Subtract(double a, double b);
    }
}

Step 2: Implement the Interface

using System.Runtime.InteropServices;

namespace CalculatorCOM
{
    [Guid("7E2D5C9B-1E3F-4A4B-8F6E-9C1D2A3B4C5D")] // placeholder; generate your own GUID
    [ComVisible(true)]
    public class Calculator : ICalculator
    {
        public double Add(double a, double b) => a + b;

        public double Subtract(double a, double b) => a - b;
    }
}

Step 3: Register for COM Interop

In your project properties, check the “Register for COM Interop” option. After building the project, the COM component will be available for use in any COM-compatible environments.

Managing COM Lifetime and Activation

COM components are not statically linked; they are activated on demand at runtime. Clients can create instances of COM objects using system APIs like CoGetClassObject and CoCreateInstance. Proper resource management relies on the explicit release of object references to ensure that memory and resources are correctly freed.

Activation Example

Below is a simple C# client code demonstrating how to use the Calculator COM object:

using System;

class Program
{
    static void Main()
    {
        Type calculatorType = Type.GetTypeFromProgID("CalculatorCOM.Calculator");
        dynamic calculator = Activator.CreateInstance(calculatorType);
        
        double resultAdd = calculator.Add(5.0, 10.0);
        double resultSubtract = calculator.Subtract(15.0, 5.0);
        
        Console.WriteLine($"Addition Result: {resultAdd}");
        Console.WriteLine($"Subtraction Result: {resultSubtract}");
    }
}

Common Pitfalls and Best Practices

  • Visibility: Only public members in the interface are visible to COM clients. Members defined in the class, but not the interface, will remain hidden from COM consumers.
  • Multiple Interfaces: A class can implement multiple interfaces. The first interface marked in the class definition is treated as the default interface for COM.
  • Threading Models: Be aware of the threading models used by COM. Ensure that your components are safe in multi-threaded contexts, particularly if accessed across threads.

Integrating AI and Automation

In the rapidly evolving tech landscape, integrating AI and automation into COM components can enhance their functionality. For example, using OpenAI’s models, you could develop intelligent components that provide insights or automate complex workflows. This would not only modernize legacy systems but also increase their value and usability in contemporary applications.

Conclusion

COM components in C# present powerful opportunities for cross-language and cross-process communication. Understanding their structure, implementation, and the critical role they play in interoperability can significantly enhance your software solutions. By exposing .NET classes to COM, you can unlock the potential of legacy systems while positioning your applications for future innovation.

For further implementation examples and insights, feel free to explore my GitHub.

Also, connect with me on LinkedIn, where I share additional resources on software architecture and engineering practices.

Understanding the Mediator Pattern: Simplifying Communication in C# with MediatR

  • Streamlined Communication: Centralizes interactions between components for clarity.
  • Decoupling: Reduces tight coupling, enhancing maintainability.
  • MediatR Integration: A robust tool for implementing the Mediator Pattern in C#.
  • Layered Architecture: Promotes separation of concerns for scalable systems.
  • Reusability: Enables components to be reused across contexts.

What is the Mediator Pattern?

The Mediator Pattern is a behavioral design pattern that promotes loose coupling by centralizing communication between objects. Instead of objects interacting directly, they communicate through a mediator, which encapsulates interaction logic. This reduces dependencies, making systems easier to maintain and extend.

Problems Addressed by the Mediator Pattern

  • Tight Coupling: Direct object interactions create complex, interdependent codebases. The Mediator Pattern eliminates this by routing communication through a single point.
  • Complex Maintenance: Centralized communication simplifies debugging and updating interaction logic.
  • Scalability Issues: Decoupled components are easier to modify or replace, supporting system growth.
  • Reusability: Independent components can be reused in different contexts without modification.

Implementing the Mediator Pattern in C# with MediatR

In .NET, MediatR is a lightweight library that simplifies the Mediator Pattern, often used with Command Query Responsibility Segregation (CQRS). It enables clean separation of concerns by handling requests (commands or queries) through mediators and their handlers.

Steps to Use MediatR

  1. Install MediatR: Add the MediatR and MediatR.Extensions.Microsoft.DependencyInjection packages via NuGet.
  2. Define Requests and Handlers: Create request classes (commands or queries) and their corresponding handlers to process them.
  3. Configure Dependency Injection: Register MediatR services in your application’s dependency injection container.
  4. Dispatch Requests: Use the IMediator interface to send requests from controllers or services.

Sample Code and Explanation

Below is a practical example of using MediatR in a C# application to handle a user registration process.

Sample Code

// 1. Install MediatR packages
// Run in your project: 
// dotnet add package MediatR
// dotnet add package MediatR.Extensions.Microsoft.DependencyInjection

using MediatR;
using Microsoft.Extensions.DependencyInjection;
using System;
using System.Threading;
using System.Threading.Tasks;

// 2. Define a Command (Request)
public class RegisterUserCommand : IRequest<User>
{
    public string Username { get; set; }
    public string Email { get; set; }
}

// 3. Define the Command Handler
public class RegisterUserCommandHandler : IRequestHandler<RegisterUserCommand, User>
{
    public Task<User> Handle(RegisterUserCommand request, CancellationToken cancellationToken)
    {
        // Simulate user registration logic (e.g., save to database)
        var user = new User
        {
            Id = Guid.NewGuid(),
            Username = request.Username,
            Email = request.Email
        };
        Console.WriteLine($"User {user.Username} registered with email {user.Email}");
        return Task.FromResult(user);
    }
}

// 4. User Model
public class User
{
    public Guid Id { get; set; }
    public string Username { get; set; }
    public string Email { get; set; }
}

// 5. Program Setup and Execution
public class Program
{
    public static async Task Main()
    {
        // Configure Dependency Injection
        var services = new ServiceCollection();
        services.AddMediatR(cfg => cfg.RegisterServicesFromAssembly(typeof(Program).Assembly));
        var serviceProvider = services.BuildServiceProvider();

        // Resolve IMediator
        var mediator = serviceProvider.GetService<IMediator>();

        // Create and send a command
        var command = new RegisterUserCommand
        {
            Username = "john_doe",
            Email = "john@example.com"
        };

        var registeredUser = await mediator.Send(command);
        Console.WriteLine($"Registered User ID: {registeredUser.Id}");
    }
}

Explanation

  • Command Definition: RegisterUserCommand represents the action (registering a user) and implements IRequest<User>, indicating it returns a User object.
  • Handler Logic: RegisterUserCommandHandler processes the command, simulating user registration. In a real application, this might involve database operations.
  • Dependency Injection: MediatR is registered in the DI container, allowing IMediator to route requests to the correct handler.
  • Request Dispatch: The IMediator.Send method sends the command to its handler, keeping the calling code decoupled from the handler’s implementation.

Layered Architecture Explained

A layered architecture organizes a .NET application into distinct layers, each with specific responsibilities, enhancing maintainability and scalability.

  • Domain Layer — Holds core business logic, entities, and domain services. Typical contents: entities, value objects, domain events.
  • Application Layer — Orchestrates business use cases, mediating between the domain and external layers. Typical contents: MediatR commands/queries, handlers, application services.
  • Infrastructure Layer — Manages technical concerns like database access and external integrations. Typical contents: repositories, EF Core contexts, API clients.
  • Presentation Layer — Handles client interactions, exposing endpoints or UI. Typical contents: controllers, Razor pages, minimal APIs.

MediatR in Layered Architecture

  • Domain Layer: Contains pure business logic, unaware of MediatR.
  • Application Layer: Hosts MediatR commands, queries, and handlers, orchestrating business logic.
  • Infrastructure Layer: Provides services (e.g., repositories) used by handlers.
  • Presentation Layer: Sends requests via IMediator, typically from controllers.

Request Flow Example

  1. A client sends a REST request to the presentation layer (e.g., a POST to /api/users).
  2. The controller creates a command (e.g., RegisterUserCommand) and dispatches it via IMediator.
  3. MediatR routes the command to its handler in the application layer.
  4. The handler collaborates with domain entities and infrastructure services (e.g., a repository).
  5. The result is returned to the controller and sent to the client.
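
Step 2 of this flow might look like the following ASP.NET Core controller (a sketch; UsersController, the route, and the response shape are assumptions, not from the original text):

```csharp
using System.Threading.Tasks;
using MediatR;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/users")]
public class UsersController : ControllerBase
{
    private readonly IMediator _mediator;

    public UsersController(IMediator mediator) => _mediator = mediator;

    [HttpPost]
    public async Task<IActionResult> Register(RegisterUserCommand command)
    {
        // The controller stays thin: it only dispatches the command
        // and shapes the HTTP response; all business logic lives in
        // the handler.
        var user = await _mediator.Send(command);
        return Ok(new { user.Id });
    }
}
```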

Conclusion

The Mediator Pattern, implemented via MediatR in C#, simplifies application architecture by decoupling components and centralizing communication. This leads to cleaner, more maintainable, and scalable codebases, especially when combined with CQRS and layered architecture. By adopting these practices, developers can build robust .NET applications that are easier to extend and test.

FAQ

  • What is the Mediator Pattern? A behavioral pattern that centralizes communication between objects, reducing direct dependencies.
  • How does MediatR improve application architecture? It decouples request handling, supports CQRS, and integrates seamlessly with dependency injection.
  • Can the Mediator Pattern be used in other languages? Yes, it’s language-agnostic and widely used in languages like Java, Python, and JavaScript.
  • What are real-world applications of the Mediator Pattern? It’s used in chat applications, event-driven systems, and microservices to manage complex interactions.

Connect with me on LinkedIn or check out my GitHub for more examples and discussions on software architecture!

Mastering SOLID Principles in C# Development

The SOLID Principles of Object-Oriented Design and How to Use Them in C#

  • Enhances maintainability and scalability of applications.
  • Guides developers in crafting robust software systems.
  • Encourages extensible software architectures.
  • Improves reliability and promotes clean design.
  • Facilitates easier testing and mocking through abstraction.

Understanding SOLID Principles

The SOLID acronym comprises five principles:

  1. Single Responsibility Principle (SRP)
  2. Open/Closed Principle (OCP)
  3. Liskov Substitution Principle (LSP)
  4. Interface Segregation Principle (ISP)
  5. Dependency Inversion Principle (DIP)

While these principles are applicable across various programming languages, they align exceptionally well with C# due to its robust type system and object-oriented capabilities. Let’s delve into each principle in detail.

Single Responsibility Principle (SRP)

Definition: A class should have only one reason to change, meaning it should only have one job or responsibility.

Implementation in C#:

Consider the following implementation where a class violates SRP by performing multiple roles:


// Bad example - multiple responsibilities
public class UserService
{
    public void RegisterUser(string email, string password)
    {
        // Register user logic
        // Send email logic
        // Log activity
    }
}

In contrast, adhering to the Single Responsibility Principle leads to a more maintainable structure:


// Better example - single responsibility
public class UserRegistration
{
    private readonly EmailService _emailService;
    private readonly LoggingService _loggingService;
    
    public UserRegistration(EmailService emailService, LoggingService loggingService)
    {
        _emailService = emailService;
        _loggingService = loggingService;
    }
    
    public void RegisterUser(string email, string password)
    {
        // Only handle user registration
        var user = new User(email, password);
        SaveUserToDatabase(user);
        
        _emailService.SendWelcomeEmail(email);
        _loggingService.LogActivity("User registered: " + email);
    }
}

Benefits of SRP:

  • Improved maintainability as each class has a distinct responsibility.
  • Easier collaboration; team members can work on separate functionalities with minimal overlap.

Open/Closed Principle (OCP)

Definition: Software entities should be open for extension but closed for modification.

Implementation in C#:

Let’s assess a traditional approach that violates the OCP:


// Bad approach
public class AreaCalculator
{
    public double CalculateArea(object shape)
    {
        if (shape is Rectangle rectangle)
            return rectangle.Width * rectangle.Height;
        else if (shape is Circle circle)
            return Math.PI * circle.Radius * circle.Radius;
        
        throw new NotSupportedException("Shape not supported");
    }
}

By implementing the OCP, we can extend functionality without altering existing code:


// Better approach using OCP
public interface IShape
{
    double CalculateArea();
}

public class Rectangle : IShape
{
    public double Width { get; set; }
    public double Height { get; set; }
    
    public double CalculateArea()
    {
        return Width * Height;
    }
}

public class Circle : IShape
{
    public double Radius { get; set; }
    
    public double CalculateArea()
    {
        return Math.PI * Radius * Radius;
    }
}

// Now we can add new shapes without modifying existing code

Benefits of OCP:

  • Encourages the development of extensible software architectures.
  • Reduces the risk of introducing bugs to existing functionalities.

Liskov Substitution Principle (LSP)

Definition: Objects of a superclass should be replaceable with objects of its subclasses without affecting the correctness of the program.

Implementation in C#:

Let’s critique this implementation which violates LSP:


// Violation of LSP
public class Rectangle
{
    public virtual int Width { get; set; }
    public virtual int Height { get; set; }
    
    public virtual int GetArea()
    {
        return Width * Height;
    }
}

public class Square : Rectangle
{
    public override int Width 
    { 
        get { return base.Width; }
        set { 
            base.Width = value;
            base.Height = value; // This breaks LSP
        }
    }
}

To adhere to LSP, we separate shape behavior into correct implementations:


// Better approach adhering to LSP
public interface IShape
{
    int GetArea();
}

public class Rectangle : IShape
{
    public int Width { get; set; }
    public int Height { get; set; }
    
    public int GetArea()
    {
        return Width * Height;
    }
}

public class Square : IShape
{
    public int Side { get; set; }
    
    public int GetArea()
    {
        return Side * Side;
    }
}

Benefits of LSP:

  • Promotes a reliable hierarchy, ensuring subclass instances work seamlessly in place of base class instances.

Interface Segregation Principle (ISP)

Definition: Clients should not be forced to depend on interfaces they do not use.

Implementation in C#:

This example showcases a common mistake by violating ISP:


// Violation of ISP
public interface IWorker
{
    void Work();
    void Eat();
    void Sleep();
}

// Better approach with segregated interfaces
public interface IWorkable
{
    void Work();
}

public interface IEatable
{
    void Eat();
}

public interface ISleepable
{
    void Sleep();
}

Benefits of ISP:

  • Reduces side effects and promotes clean design, enhancing modularity.
  • Developers work with specific interfaces relevant to their implementations.

Dependency Inversion Principle (DIP)

Definition: High-level modules should not depend on low-level modules; both should depend on abstractions.

Implementation in C#:

Examine this flawed approach under DIP:


// Violation of DIP
public class NotificationService
{
    private readonly EmailSender _emailSender;
    
    public NotificationService()
    {
        _emailSender = new EmailSender();
    }
    
    public void SendNotification(string message, string recipient)
    {
        _emailSender.SendEmail(message, recipient);
    }
}

Implementing DIP effectively allows for a more flexible design:


// Better approach using DIP
public interface IMessageSender
{
    void SendMessage(string message, string recipient);
}

public class EmailSender : IMessageSender
{
    public void SendMessage(string message, string recipient)
    {
        // Email sending logic
    }
}

public class SMSSender : IMessageSender
{
    public void SendMessage(string message, string recipient)
    {
        // SMS sending logic
    }
}

public class NotificationService
{
    private readonly IMessageSender _messageSender;
    
    public NotificationService(IMessageSender messageSender)
    {
        _messageSender = messageSender;
    }
    
    public void SendNotification(string message, string recipient)
    {
        _messageSender.SendMessage(message, recipient);
    }
}

Benefits of DIP:

  • Enhances the flexibility and reusability of code.
  • Facilitates easier testing and mocking through abstraction.

Conclusion

Incorporating the SOLID principles in C# development results in several benefits, such as improved maintainability, enhanced testability, increased flexibility, better code organization, and reduced technical debt. As applications grow in scale and complexity, consciously applying these principles will contribute to producing robust, maintainable, and adaptable software systems.

By prioritizing SOLID principles in your coding practices, you won’t just write C# code; you’ll create software that stands the test of time.

If you’re interested in exploring further implementation examples, feel free to connect with me on LinkedIn or check out my GitHub. Happy coding!

FAQ

What are the SOLID principles?

The SOLID principles are five design principles that help software developers create more maintainable and flexible systems.

How does SRP improve code quality?

SRP enhances code quality by ensuring that a class has only one reason to change, making it easier to manage and understand.

What advantages does OCP provide?

OCP allows developers to extend functionalities without changing existing code, reducing bugs and improving code safety.

Can LSP help avoid bugs?

Yes, adhering to LSP promotes a reliable class hierarchy and helps to avoid bugs that can arise from unexpected behavior in subclasses.

Why is Dependency Inversion important?

DIP is crucial for reducing coupling and enhancing flexibility, making it easier to change or replace components without affecting high-level modules.

Architecting Scalable OCPP Compliant EV Charging Platforms

Architecting Scalable OCPP Compliant EV Charging Platforms

  • Understanding OCPP: A pivotal standard for interoperability in charging networks.
  • Benefits: Highlights include hardware agnosticism, interoperability, and enhanced security.
  • Key Components: Focuses on backend design, CSMS, and certification compliance.
  • Real-World Examples: Showcases implementations by EV Connect and AMPECO.
  • Future Considerations: Emphasizes upgradeability, scalability, and evolving security needs.

Table of Contents

Understanding OCPP

The Open Charge Point Protocol (OCPP) serves as the communication backbone between EV chargers and Charging Station Management Systems (CSMS). By facilitating interoperability, OCPP allows network operators to seamlessly integrate different brands of charging stations into a unified ecosystem. As a widely embraced standard, OCPP is crucial in establishing cohesive charging networks without being constrained by vendor-specific technologies.

Currently, multiple versions of OCPP are in play:

  • OCPP 1.5: The initial version that introduced basic functionalities for communication between chargers and CSMS.
  • OCPP 1.6: A more robust version adding features like improved error handling and enhanced security protocols.
  • OCPP 2.0.1: The latest iteration emphasizing advanced security and additional capabilities, which offers certifications for core and advanced modules through the Open Charge Alliance (OCA).

With the impending rollout of more certification modules in March 2025, OCPP compliance is set to become an industry-standard requirement that platform architects must consider when designing scalable charging solutions.

Benefits of OCPP-Based Architecture

Hardware Agnosticism

One of the standout features of OCPP is its ability to enable hardware-agnostic charging platforms. Network operators can integrate any OCPP-compliant charger, independent of the manufacturer. For instance, AMPECO’s platform claims compatibility with over 70 leading charging station manufacturers, emphasizing OCPP’s flexibility and adaptability. This characteristic allows businesses to scale their operations without being locked into a specific vendor’s ecosystem, providing freedom for future growth and innovation.

Interoperability and Future-Proofing

Adopting OCPP standards is pivotal for ensuring that charging networks remain compatible across generations of equipment. By focusing on OCPP compliance, operators mitigate the risk of fragmented systems that could render investments obsolete when technology advances. This forward-thinking approach is essential for maintaining competitive advantages in a fast-evolving marketplace.

Security Enhancements

With OCPP 2.0.1, security is elevated to new heights. The implementation of advanced security modules helps safeguard charging networks against emerging threats. For example, EV Connect’s OCPP 2.0.1 certification signifies a commitment to robust security measures, ensuring that as charging infrastructures scale, they retain their integrity and protection against potential vulnerabilities.

Key Components for Scalable Architecture

Architecting a scalable, OCPP-compliant platform necessitates careful consideration of several key components:

Backend System Design

A robust backend design is crucial for supporting multiple OCPP versions concurrently. Given that charging networks often incorporate a mix of equipment operating on different protocol versions, the architecture must be flexible and capable of handling various communication standards. For instance, AMPECO’s platform supports a triad of versions: OCPP 1.5, 1.6, and 2.0.1, demonstrating the importance of backward compatibility in charging network design.
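One lightweight way to support mixed fleets is to negotiate the protocol version at connection time: OCPP-J charge points advertise the version they speak as a WebSocket subprotocol (e.g. ocpp1.6, ocpp2.0.1). A minimal Python sketch of the selection logic follows; the handler wiring around it is left out, and the function name is mine:

```python
# Versions this backend implements, newest first.
SUPPORTED_SUBPROTOCOLS = ("ocpp2.0.1", "ocpp1.6", "ocpp1.5")

def pick_subprotocol(offered):
    """Choose the newest OCPP version both sides support.

    `offered` is the list of subprotocols from the charge point's
    Sec-WebSocket-Protocol header. Returns None when there is no
    overlap, in which case the server should reject the upgrade.
    """
    for candidate in SUPPORTED_SUBPROTOCOLS:
        if candidate in offered:
            return candidate
    return None
```

The negotiated value can then select the matching message handler, letting one backend serve OCPP 1.5, 1.6, and 2.0.1 stations side by side.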

Charging Station Management System (CSMS)

The CSMS acts as the nerve center for the entire charging network, directing communication between connected charging stations and managing their operational status. This component must be designed for horizontal scalability, enabling additional charging points to be integrated seamlessly as demand grows.

Certification Compliance

Pursuing official OCPP certification through the OCA is vital for ensuring interoperability and long-term viability. A certified platform is not only a mark of quality; it also guarantees adherence to global standards, laying the foundation for seamless integration with certified charging hardware. This compliance is fundamental for engendering trust among network operators and users alike.

Real-World Implementation Examples

EV Connect’s OCPP 2.0.1 Implementation

In March 2025, EV Connect announced its achievement of OCPP 2.0.1 certification for both Core and Advanced Security modules. This milestone illustrates their dedication to open standards and the interoperability of their solutions. By leveraging OCPP compliance, EV Connect enhances user experiences through a reliable and efficient charging ecosystem, marking a significant step toward long-term stability and adaptability in the industry.

AMPECO’s Multi-Version Support

AMPECO’s EV Charging Platform stands out as a prime example of scalable architecture capable of supporting multiple OCPP versions simultaneously. Their hardware-agnostic approach allows them to integrate diverse manufacturers through OCPP compliance, proving the viability and flexibility of their solution. Such an adaptable architecture is essential for operators seeking to broaden their network without compromising on service quality.

Future Considerations

When designing scalable OCPP-compliant platforms, architects and engineers must contemplate several key future-oriented factors:

  • Future Upgradeability: Establish a framework that allows for seamless upgrades to future OCPP versions without requiring a complete overhaul.
  • Backward Compatibility: Ensure that newer systems can still interact with older OCPP implementations, preserving existing investments.
  • Scalability: Design systems that can efficiently handle thousands to millions of charging sessions, accommodating growth trajectories as EV adoption rises significantly.
  • Evolving Security Protocols: Regularly update security measures to keep pace with emerging threats and standards in the cybersecurity landscape.
  • Integration with Energy Management Systems: Explore the potential for integrating charging platforms with broader energy management infrastructures for optimized performance and resource utilization.

Summary

In conclusion, designing scalable OCPP-compliant EV charging platforms involves intricate knowledge of the OCPP standard and its implications for interoperability, security, and future-proofing. As the EV market continues its rapid expansion, architects must emphasize the importance of building robust, flexible, and certification-compliant systems that can support a diverse ecosystem of charging stations.

By leveraging OCPP standards, businesses can forge ahead in developing agile, adaptable charging infrastructures that are not only capable of handling present demands but are also well-prepared for future innovations in the electric vehicle landscape.

If you’d like to discuss innovative approaches to OCPP compliance or explore architectural strategies for your next project, connect with me on LinkedIn, or check out my GitHub for implementation examples!

FAQs

What is OCPP?

OCPP stands for Open Charge Point Protocol, which is a communication standard that allows for interoperability between electric vehicle chargers and management systems.

Why is security important in OCPP?

Security in OCPP is vital to protect charging networks from cyber threats and to ensure the integrity and reliability of EV charging systems.

How does hardware agnosticism benefit operators?

Hardware agnosticism allows operators to choose among various OCPP-compliant chargers without being locked into a specific manufacturer, enhancing efficiency and scalability.

What are the key features of OCPP 2.0.1?

Key features of OCPP 2.0.1 include enhanced security protocols, better error handling, and the ability to support a broader range of functionalities for charging stations.

Fixing “spawn npx ENOENT” in Windows 11 When Adding MCP Server with Node/NPX

If you’re running into the error:

spawn npx ENOENT

while configuring an MCP (Model Context Protocol) server on Windows 11, you’re not alone. This error commonly appears when integrating tools like @upstash/context7-mcp using Node.js environments that rely on NPX, especially in cross-platform development.

This post explains:

  • What causes the “spawn npx ENOENT” error on Windows
  • The difference between two MCP server configuration methods
  • A working fix using cmd /c
  • Why this issue is specific to Windows

The Problem: “spawn npx ENOENT”

Using this configuration in your .mcprc.json or a similar setup:

{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp@latest"]
    }
  }
}

will cause the following error on Windows:

spawn npx ENOENT

This indicates that Node.js tried to spawn npx but couldn’t locate it in the system’s PATH.

Root Cause: Windows vs Unix Shell Behavior

On UNIX-like systems (Mac/Linux), spawn can run shell commands like npx directly. But Windows behaves differently:

  • Windows expects a .exe file to be explicitly referenced when spawning a process.
  • npx is not a native binary executable; it requires a shell to interpret and run it.
  • Node’s child_process.spawn does not invoke a shell by default unless specifically instructed.

In the failing example, the system tries to invoke npx directly as if it were a standalone executable, which doesn’t work on Windows.

The Fix: Wrapping with cmd /c

This configuration solves the issue:

{
  "context7": {
    "command": "cmd",
    "args": [
      "/c",
      "npx",
      "-y",
      "@upstash/context7-mcp@latest"
    ]
  }
}

Explanation

  • "cmd" invokes the Windows Command Prompt.
  • "/c" tells the shell to execute the command that follows.
  • The rest of the line (npx -y @upstash/context7-mcp@latest) is interpreted and executed properly by the shell.

This ensures that npx is resolved correctly and executed within a compatible environment.

Technical Comparison

Configuration Style                          | Works on Windows? | Shell Used? | Reason
"command": "npx"                             | No                | No          | Tries to execute npx directly without a shell
"command": "cmd", "args": ["/c", "npx", ...] | Yes               | Yes         | Runs the command inside the Windows shell, allowing proper resolution

Best Practices

When using Node.js-based CLI tools across platforms:

  • Wrap shell commands using cmd /c (Windows) or sh -c (Unix)
  • Avoid assuming that commands like npx are executable as binaries
  • Test your scripts in both Windows and Unix environments when possible
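The same pitfall exists in any runtime that spawns child processes without a shell. As a minimal sketch of the practice above (the helper name is mine, not from any library), picking the right wrapper per platform looks like this in Python:

```python
import platform
import subprocess

def run_cli(command: str) -> int:
    """Run a shell-interpreted command portably and return its exit code.

    On Windows, tools like npx are .cmd shims rather than .exe binaries,
    so they must go through cmd /c; on Unix, sh -c does the same job.
    """
    if platform.system() == "Windows":
        argv = ["cmd", "/c", command]
    else:
        argv = ["sh", "-c", command]
    return subprocess.call(argv)
```

Calling run_cli("npx -y @upstash/context7-mcp@latest") then resolves npx through the platform’s shell instead of expecting a standalone executable.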

Conclusion

If you’re encountering the spawn npx ENOENT error when configuring MCP servers on Windows 11, the fix is straightforward: use cmd /c to ensure shell interpretation. This small change ensures compatibility and prevents runtime errors across different operating systems.

OCPP 1.6: The Unsung Hero Powering Your EV Charge (But It’s Getting a Major Upgrade!) – A Deep Dive

Ever pulled up to a charging station, plugged in, and watched your electric vehicle magically start to juice up? That seamless experience isn’t magic; it’s the result of a communication protocol called OCPP – the Open Charge Point Protocol. And for a significant chapter in the EV revolution, version 1.6 was the quiet workhorse behind the scenes, ensuring smooth communication between your car and the charging infrastructure. Think of it as the universal translator that made charging stations and management systems speak the same language.

Why Should You Care About OCPP 1.6? (Even If “Protocol” Sounds Like Tech Jargon)

Let’s be honest, “protocol” doesn’t exactly scream excitement. But here’s why OCPP 1.6 mattered, and why it’s worth a quick chat:

  • Charging Anywhere, Anytime: Imagine if your phone only worked with certain cell towers. Chaos, right? OCPP 1.6 prevented that in the EV world. It meant you could plug into a wider range of chargers, regardless of who made them or managed them.
  • Remote Control for Operators: Think of charging station operators as air traffic controllers for electricity. OCPP 1.6 gave them the ability to monitor, control, and update stations remotely. This meant faster fixes, better service, and even dynamic pricing adjustments.
  • Data-Driven Optimization: OCPP 1.6 allowed for the collection of valuable data on charging patterns. This data helped operators understand usage, optimize pricing, and improve the overall charging experience.

Taking a Slightly Deeper Dive (But Still Keeping it Real)

So, how did this “universal translator” actually work? It broke down charging tasks into manageable “profiles,” like departments in a well-organized company:

  • Core Profile: The Front Desk: This is where the basic interactions happened: verifying user IDs, starting and stopping charging sessions, and reporting energy usage. Messages like Authorize, BootNotification, and MeterValues handled these crucial tasks.
  • Firmware Management: The IT Department: Keeping charging stations up-to-date is vital for security and functionality. This profile allowed for remote firmware updates, ensuring stations were running the latest software.
  • Local Authorization List: The Offline Backup: Ever lose internet connection? This profile allowed charging to continue even when the network was down, using a local list of authorized users.
  • Reservation Profile: The Booking System: This allowed users to reserve charging slots, ensuring a spot was available when needed.
  • Smart Charging Profile: The Energy Optimizer: This profile enabled dynamic energy management, balancing grid load and optimizing charging schedules.
  • Remote Trigger Profile: The On-Demand Information Request: This allowed the central system to request specific data from the charging station whenever needed.

Understanding Message Structure: JSON (OCPP-J)

Since JSON (OCPP-J) is the more prevalent format in OCPP 1.6, let’s focus on that. A CALL (request) message is a JSON array with four elements:

  1. MessageTypeId: Indicates the message type (2 = CALL, 3 = CALLRESULT, 4 = CALLERROR).
  2. UniqueId: Matches requests and responses.
  3. Action: The OCPP message name (e.g., “Authorize,” “MeterValues”).
  4. Payload: The message’s data as a JSON object.

A CALLRESULT (response) omits the Action and carries only three elements: MessageTypeId, UniqueId, and Payload.

Example Messages:

  1. Authorize Request (CALL):
    • [2, "12345", "Authorize", {"idTag": "ABCDEF1234567890"}]
  2. Authorize Response (CALLRESULT; note there is no Action element):
    • [3, "12345", {"idTagInfo": {"status": "Accepted"}}]
  3. MeterValues Request (CALL):
    • [2, "67890", "MeterValues", {"connectorId": 1, "transactionId": 9876, "meterValue": [{"timestamp": "2024-10-27T10:00:00Z", "sampledValue": [{"value": "1234", "unit": "Wh", "measurand": "Energy.Active.Import.Register"}]}]}]
  4. StatusNotification Request (CALL):
    • [2, "13579", "StatusNotification", {"connectorId": 1, "status": "Charging", "timestamp": "2024-10-27T10:05:00Z"}]
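A short Python sketch of framing and parsing these arrays makes the structure concrete; note that per the OCPP-J specification a CALLRESULT omits the Action field and carries only three elements:

```python
import json
import uuid

CALL, CALLRESULT = 2, 3

def make_call(action, payload):
    # A CALL is a four-element array: [2, uniqueId, action, payload].
    return json.dumps([CALL, str(uuid.uuid4()), action, payload])

def make_callresult(unique_id, payload):
    # A CALLRESULT echoes the request's uniqueId but has no action.
    return json.dumps([CALLRESULT, unique_id, payload])

def parse(raw):
    msg = json.loads(raw)
    if msg[0] == CALL:
        return {"type": "CALL", "id": msg[1], "action": msg[2], "payload": msg[3]}
    if msg[0] == CALLRESULT:
        return {"type": "CALLRESULT", "id": msg[1], "payload": msg[2]}
    raise ValueError("unsupported MessageTypeId: %r" % msg[0])
```

For example, parse(make_call("Authorize", {"idTag": "ABCDEF1234567890"})) returns the action and payload, and the matching response reuses the same UniqueId so the charge point can correlate it.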

OCPP 1.6 Message Rundown:

Here’s a quick overview of all the messages in OCPP 1.6, organized by profile:

Core Profile:

  • Authorize: Checks user authorization.
  • BootNotification: Charge Point sends upon startup.
  • ChangeAvailability: Sets Charge Point/connector availability.
  • ChangeConfiguration: Modifies Charge Point configuration.
  • ClearCache: Clears local authorization cache.
  • DataTransfer: Vendor-specific data exchange.
  • GetConfiguration: Retrieves Charge Point configuration.
  • Heartbeat: Charge Point sends to indicate online status.
  • MeterValues: Reports energy consumption.
  • RemoteStartTransaction/RemoteStopTransaction: Remote charging control.
  • Reset: Reboots the Charge Point.
  • StartTransaction: Charge Point sends at charging start.
  • StatusNotification: Reports Charge Point status.
  • StopTransaction: Charge Point sends at charging end.
  • UnlockConnector: Remote connector release.

Firmware Management Profile:

  • GetDiagnostics: Requests diagnostic logs.
  • DiagnosticsStatusNotification: Reports diagnostic log upload status.
  • FirmwareStatusNotification: Reports firmware update status.
  • UpdateFirmware: Initiates firmware update.

Local Authorization List Management Profile:

  • GetLocalListVersion: Checks local list version.
  • SendLocalList: Updates local authorization list.

Reservation Profile:

  • ReserveNow: Requests a reservation.
  • CancelReservation: Cancels a reservation.

Smart Charging Profile:

  • SetChargingProfile: Sets charging schedules/limits.
  • ClearChargingProfile: Removes charging profiles.
  • GetCompositeSchedule: Requests active charging schedule.

Remote Trigger Profile:

  • TriggerMessage: Requests specific messages from Charge Point.

Security: The Silent Guardian (And Where We Need to Step Up)

Security is paramount in the EV world. After all, we’re dealing with sensitive data and high-voltage electricity. OCPP 1.6 incorporated:

  • TLS Encryption: The Secure Tunnel: This encrypted communication between charging stations and management systems, protecting data from unauthorized access.
  • Authentication Mechanisms: The ID Check: This verified the identity of users and devices, ensuring only authorized parties could access the charging infrastructure.
  • Secure Firmware Updates: The Software Integrity Check: This ensured that firmware updates were legitimate and not malicious software.

However, OCPP 1.6 wasn’t perfect. Some of the older security methods, like basic username/password authentication, were vulnerable to attacks, and vulnerabilities in how certain messages were handled have also been discovered.

The Future is Here: OCPP 2.0.1 and Beyond – A Necessary Evolution

While OCPP 1.6 served its purpose, the EV landscape is rapidly evolving. That’s why we’re seeing the rise of OCPP 2.0.1 and OCPP 2.1 – a major upgrade in terms of features and security:

  • Enhanced Device Management: More granular control and monitoring of charging stations.
  • Stronger Security Protocols: Advanced encryption, certificate-based authentication, and defined security profiles.
  • Advanced Smart Charging Capabilities: Integration with energy management systems, dynamic load balancing, and support for ISO 15118.
  • Native ISO 15118 Support: Enabling features like “Plug & Charge,” where EVs can automatically authenticate and charge without user intervention.
  • Bidirectional Charging (V2G/V2X): Enabling EVs to send power back to the grid, transforming them into mobile energy storage units.
  • Improved Error Handling and Data Compression: Making the system more robust and efficient.

The Human Takeaway: Embracing the Future of EV Charging

OCPP 1.6 was a crucial stepping stone in the EV revolution, laying the foundation for interoperability.

What is OCPP? A Complete Guide to the EV Charging Communication Protocol

As electric vehicles (EVs) become more mainstream, the infrastructure that powers them is evolving rapidly. Behind the scenes of every public EV charger is a smart communication layer that ensures chargers operate efficiently, securely, and interoperably. That communication standard is called OCPP — Open Charge Point Protocol.

In this article, we’ll break down what OCPP is, why it matters, how it works, and the different versions available today. Whether you’re an EV driver, charging network operator, or tech enthusiast, this guide will help you understand how OCPP is shaping the future of electric mobility.

🔌 What is OCPP?

OCPP (Open Charge Point Protocol) is an application protocol used to enable communication between Electric Vehicle Supply Equipment (EVSE)—commonly known as EV chargers—and a Central Management System (CMS), often referred to as a Charge Point Operator (CPO) backend.

It is vendor-neutral and open-source, developed by the Open Charge Alliance (OCA) to standardize how EV chargers and management systems talk to each other.

Think of OCPP as the universal “language” between the charging station and the software that manages it.

⚙️ How OCPP Works

OCPP defines a set of WebSocket-based or SOAP-based messages that are exchanged between the client (charge point) and the server (backend system).

For example:

  • When a driver plugs in their EV, the charger sends a StartTransaction message to the backend.
  • The backend authenticates the session and replies with a StartTransaction confirmation.
  • Once charging ends, the charger sends a StopTransaction message.

Other key message types include:

  • Heartbeat: to ensure the charger is online
  • StatusNotification: to report charger availability
  • BootNotification: sent when the charger powers up
  • MeterValues: for usage data and billing
  • UpdateFirmware, GetDiagnostics, and RemoteStartTransaction/RemoteStopTransaction commands

These interactions enable remote control, monitoring, diagnostics, and software updates — all of which are essential for smart charging infrastructure.
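To make the transaction flow concrete, here is a toy Python backend handling the two messages above. The field names (idTag, meterStart, meterStop, transactionId) follow OCPP 1.6, but the in-memory session store and dispatch are simplifications of mine, not part of the protocol:

```python
def handle(message, sessions):
    """Process a StartTransaction or StopTransaction request against an
    in-memory session store; returns the response payload."""
    action = message["action"]
    if action == "StartTransaction":
        tx_id = len(sessions) + 1
        sessions[tx_id] = {"idTag": message["idTag"],
                           "meter_start": message["meterStart"]}
        return {"transactionId": tx_id, "idTagInfo": {"status": "Accepted"}}
    if action == "StopTransaction":
        tx = sessions.pop(message["transactionId"])
        # Energy delivered is the meter delta, in Wh.
        return {"idTagInfo": {"status": "Accepted"},
                "energyWh": message["meterStop"] - tx["meter_start"]}
    raise ValueError("unhandled action: " + action)
```

A real CSMS would persist sessions, authorize the idTag against a user database, and feed the meter delta into billing, but the request/response shape stays the same.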

🚀 Why is OCPP Important?

  • Interoperability: OCPP allows chargers from different manufacturers to connect to any compliant backend, reducing vendor lock-in.
  • Scalability: Operators can manage thousands of chargers efficiently using a single system.
  • Smart Charging: OCPP supports load balancing, grid integration, and energy optimization.
  • Security: Latest versions support enhanced encryption, authentication, and access control mechanisms.

OCPP is especially important for public EV charging networks, fleet operators, municipalities, and utility companies that require flexibility and operational efficiency.

🔢 OCPP Versions Explained

Over the years, OCPP has evolved to meet the growing demands of EV infrastructure. Here’s a look at its major versions:

OCPP 1.2 (2009)

  • The first version
  • Limited functionality
  • Largely outdated and no longer used

OCPP 1.5

  • Improved stability
  • Better message structure
  • Still lacks advanced features

OCPP 1.6 (2015)

  • Most widely deployed version
  • Supports JSON over WebSocket and SOAP
  • Adds:
    • Remote Start/Stop
    • Smart Charging (Load Profiles)
    • Firmware Management
    • Diagnostics
  • Still supported by most major networks today

OCPP 2.0 (2018)

  • Major overhaul of the protocol
  • Adds:
    • Device Management
    • Security Profiles
    • ISO 15118 integration (Plug & Charge)
    • Improved Smart Charging
    • Better data modeling

OCPP 2.0.1 (2020)

  • The current stable version
  • Focused on bug fixes and practical enhancements from real-world implementations
  • Growing adoption in next-generation networks

📝 Note: OCPP 2.x is not backward compatible with 1.6, but many platforms support dual-stack operation.

🛠️ Technical Architecture Overview

A typical OCPP-based EV charging setup consists of:

  1. Charge Point (Client):
    • Hardware installed at EV charging stations
    • Acts as the OCPP client
    • Initiates communication
  2. Central System (Server):
    • Backend system that processes OCPP messages
    • Manages user sessions, pricing, diagnostics, and energy usage
  3. Communication Layer:
    • Typically uses WebSockets over TLS for secure, real-time, full-duplex communication
    • Some older implementations use SOAP over HTTP
  4. Optional Add-ons:
    • Token authentication (RFID, app-based)
    • OCPI/OSCP/ISO 15118 integration for roaming and advanced smart grid features

🔒 Security in OCPP

Starting with OCPP 2.0, the protocol includes support for secure communication profiles, including:

  • TLS Encryption
  • Client-side and server-side certificates
  • Secure firmware updates
  • Signed metering and transaction data

These features make OCPP ready for enterprise-scale, mission-critical deployments.

🌍 Real-World Use Cases

  • Public Charging Networks: Roaming across different charger brands
  • Fleet Management: Real-time diagnostics and energy consumption tracking
  • Retail Sites & Fuel Stations: Revenue tracking and load optimization
  • Smart Cities & Utilities: Demand response and grid integration

📈 Final Thoughts

OCPP is the backbone of modern EV charging infrastructure. As the electric vehicle ecosystem expands, having a universal, open, and future-ready protocol like OCPP ensures that EV charging remains reliable, scalable, and secure.

Whether you’re deploying 5 chargers in a parking lot or 5,000 across a city, OCPP gives you the flexibility to choose the hardware and software that suit your needs — all while ensuring interoperability with the rest of the EV ecosystem.

Want to learn more about OCPP, EV charging, or smart infrastructure? Follow this blog for future deep-dives, comparisons, and real-world implementation guides!

Scraping JSON-LD from a Next.js Site with Crawl4AI: My Debugging Journey

Scraping data from modern websites can feel like a puzzle, especially when they’re built with Next.js and all that fancy JavaScript magic. Recently, I needed to pull some product info—like names, prices, and a few extra details—from an e-commerce page that was giving me a headache. The site (let’s just call it https://shop.example.com/products/[hidden-stuff]) used JSON-LD tucked inside a <script> tag, but my first attempts with Crawl4AI came up empty. Here’s how I cracked it, step by step, and got the data I wanted.

The Headache: Empty Results from a Next.js Page

I was trying to grab details from a product page—think stuff like the item name, description, member vs. non-member prices, and some category info. The JSON-LD looked something like this (I’ve swapped out the real details for a fake example):

{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Beginner’s Guide to Coffee Roasting",
  "description": "Learn the basics of roasting your own coffee beans at home. Recorded live last summer.",
  "provider": {
    "@type": "Organization",
    "name": "Bean Enthusiast Co."
  },
  "offers": [
    {"@type": "Offer", "price": 49.99, "priceCurrency": "USD"},
    {"@type": "Offer", "price": 59.99, "priceCurrency": "USD"}
  ],
  "skillLevel": "Beginner",
  "hasWorkshop": [
    {
      "@type": "WorkshopInstance",
      "deliveryMethod": "Online",
      "workshopSchedule": {"startDate": "2024-08-15"}
    }
  ]
}

My goal was to extract this, label the cheaper price as “member” and the higher one as “non-member,” and snag extras like skillLevel and deliveryMethod. Simple, right? Nope. My first stab at it with Crawl4AI gave me nothing—just an empty [].

What Went Wrong: Next.js Threw Me a Curveball

Next.js loves doing things dynamically, which means the JSON-LD I saw in my browser’s dev tools wasn’t always in the raw HTML Crawl4AI fetched. I started with this basic setup:

import json

from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

schema = {
    "name": "Product Schema",
    "baseSelector": "script[type='application/ld+json']",
    "fields": [{"name": "json_ld_content", "selector": "script[type='application/ld+json']", "type": "text"}]
}

async def extract_data(url):
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url=url, extraction_strategy=JsonCssExtractionStrategy(schema))
        extracted_data = json.loads(result.extracted_content)
        print(extracted_data)

# Output: []

Empty. Zilch. I dug into the debug output and saw the JSON-LD was in result.html, but result.extracted_content was blank. Turns out, Next.js was injecting that <script> tag after the page loaded, and Crawl4AI wasn’t catching it without some extra nudging.

How I Fixed It: A Workaround That Worked

After banging my head against the wall, I figured out I needed to make Crawl4AI wait for the JavaScript to do its thing and then grab the JSON-LD myself from the HTML. Here’s the code that finally worked:

import json
import asyncio
from crawl4ai import AsyncWebCrawler

async def extract_product_schema(url):
    async with AsyncWebCrawler(verbose=True, user_agent="Mozilla/5.0") as crawler:
        print(f"Checking out: {url}")
        result = await crawler.arun(
            url=url,
            js_code=[
                "window.scrollTo(0, document.body.scrollHeight);",  # Wake up the page
                "await new Promise(resolve => setTimeout(resolve, 5000));"  # Give it 5 seconds
            ],
            bypass_cache=True,
            timeout=30
        )

        if not result.success:
            print(f"Oops, something broke: {result.error_message}")
            return None

        # Digging into the HTML myself
        html = result.html
        start_marker = '<script type="application/ld+json">'
        end_marker = '</script>'
        start_idx = html.find(start_marker)
        if start_idx == -1:
            print("Couldn’t find the JSON-LD.")
            return None
        start_idx += len(start_marker)  # only skip past the marker once we know it exists
        end_idx = html.find(end_marker, start_idx)
        if end_idx == -1:
            print("Couldn’t find the JSON-LD.")
            return None

        json_ld_raw = html[start_idx:end_idx].strip()
        json_ld = json.loads(json_ld_raw)

        # Sorting out the product details
        if json_ld.get("@type") == "Product":
            offers = sorted(
                [{"price": o.get("price"), "priceCurrency": o.get("priceCurrency")} for o in json_ld.get("offers", [])],
                key=lambda x: x["price"]
            )
            workshop_instances = json_ld.get("hasWorkshop", [])
            schedule = workshop_instances[0].get("workshopSchedule", {}) if workshop_instances else {}
            
            product_info = {
                "name": json_ld.get("name"),
                "description": json_ld.get("description"),
                "providerName": json_ld.get("provider", {}).get("name"),
                "memberPrice": offers[0] if offers else None,
                "nonMemberPrice": offers[-1] if offers else None,
                "skillLevel": json_ld.get("skillLevel"),
                "deliveryMethod": workshop_instances[0].get("deliveryMethod") if workshop_instances else None,
                "startDate": schedule.get("startDate")
            }
            return product_info
        print("No product data here.")
        return None

async def main():
    url = "https://shop.example.com/products/[hidden-stuff]"
    product_data = await extract_product_schema(url)
    if product_data:
        print("Here’s what I got:")
        print(json.dumps(product_data, indent=2))

if __name__ == "__main__":
    asyncio.run(main())

What I Got Out of It

{
  "name": "Beginner’s Guide to Coffee Roasting",
  "description": "Learn the basics of roasting your own coffee beans at home. Recorded live last summer.",
  "providerName": "Bean Enthusiast Co.",
  "memberPrice": {
    "price": 49.99,
    "priceCurrency": "USD"
  },
  "nonMemberPrice": {
    "price": 59.99,
    "priceCurrency": "USD"
  },
  "skillLevel": "Beginner",
  "deliveryMethod": "Online",
  "startDate": "2024-08-15"
}

How I Made It Work

  • Waiting for JavaScript: I told Crawl4AI to scroll and hang out for 5 seconds with js_code. That gave Next.js time to load everything up.
  • DIY Parsing: The built-in extractor wasn’t cutting it, so I searched the HTML for the <script> tag and pulled the JSON-LD out myself.
  • Price Tags: Sorted the prices and called the lowest “member” and the highest “non-member”, which seemed like a safe bet for this site.
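That last price-labeling step is easy to test on its own. Here it is pulled out into a small helper, with the same assumption flagged: the cheapest offer is the member price, which held for this site but is worth verifying before reusing elsewhere.

```python
def label_prices(offers):
    """Sort JSON-LD offers by price and label the extremes.

    Assumes the cheapest offer is the member price and the most
    expensive is the non-member price -- site-specific, verify first.
    """
    if not offers:
        return {"memberPrice": None, "nonMemberPrice": None}
    ordered = sorted(
        ({"price": o.get("price"), "priceCurrency": o.get("priceCurrency")} for o in offers),
        key=lambda o: o["price"],
    )
    return {"memberPrice": ordered[0], "nonMemberPrice": ordered[-1]}

offers = [
    {"@type": "Offer", "price": 59.99, "priceCurrency": "USD"},
    {"@type": "Offer", "price": 49.99, "priceCurrency": "USD"},
]
print(label_prices(offers))
# {'memberPrice': {'price': 49.99, 'priceCurrency': 'USD'}, 'nonMemberPrice': {'price': 59.99, 'priceCurrency': 'USD'}}
```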

What I Learned Along the Way

  • Next.js is Tricky: It’s not just about the HTML you get—it’s about what shows up after the JavaScript runs. Timing is everything.
  • Sometimes You Gotta Get Hands-On: When the fancy tools didn’t work, digging into the raw HTML saved me.
  • Debugging Pays Off: Printing out the HTML and extractor output showed me exactly where things were going wrong.

Azure Service Bus Peek-Lock: A Comprehensive Guide to Reliable Message Processing

Working with Peek-Lock in Azure Service Bus: A Practical Guide

In many distributed systems, reliable message handling is a top priority. When I first started building an order processing application, I learned very quickly that losing even one message could cause major headaches. That’s exactly where Azure Service Bus and its Peek-Lock mode came to the rescue. By using Peek-Lock, you don’t remove the message from the queue as soon as you receive it. Instead, you lock it for a certain period, process it, and then decide what to do next—complete, abandon, dead-letter, or defer. Here’s how it all fits together.

Why Peek-Lock Matters

Peek-Lock is one of the two receiving modes offered by Azure Service Bus. The other is Receive and Delete, which automatically removes messages from the queue upon receipt. While that might be fine for scenarios where occasional message loss is acceptable, many real-world applications need stronger guarantees.

  1. Reliability: With Peek-Lock, if processing fails, you can abandon the message. This makes it visible again for another attempt, reducing the risk of data loss.
  2. Explicit Control: You decide when a message is removed. After you successfully handle the message (e.g., update a database or complete a transaction), you explicitly mark it as complete.
  3. Error Handling: If the same message repeatedly fails, you can dead-letter it for investigation. This helps avoid getting stuck in an endless processing loop.

What Happens If the Lock Expires?

By default, a message lock is held for 30 seconds (configurable per queue or subscription, up to a maximum of 5 minutes). If your code doesn’t complete or abandon the message before the lock expires, the message becomes visible to other receivers again. For potentially lengthy processing you can renew the lock programmatically, although that introduces additional complexity. The key takeaway is to design your service to either complete or abandon messages quickly, or renew the lock when more time is truly necessary.
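To make the lock lifecycle concrete, here is a deliberately simplified in-memory model of peek-lock semantics. This is not the Azure SDK, just a toy illustration of how visibility, completion, abandonment, and lock expiry interact:

```python
class ToyPeekLockQueue:
    """A toy model of Service Bus peek-lock semantics, for illustration only."""

    def __init__(self, lock_duration=30.0):
        self.lock_duration = lock_duration
        self._messages = []  # [body, locked_until] pairs; 0.0 = unlocked

    def send(self, body):
        self._messages.append([body, 0.0])

    def receive(self, now):
        """Lock and return the first visible message, or None."""
        for entry in self._messages:
            if entry[1] <= now:  # never locked, or lock expired -> visible
                entry[1] = now + self.lock_duration
                return entry[0]
        return None

    def complete(self, body):
        """Successful processing: remove the message for good."""
        self._messages = [e for e in self._messages if e[0] != body]

    def abandon(self, body):
        """Failed processing: release the lock so another receiver can retry."""
        for entry in self._messages:
            if entry[0] == body:
                entry[1] = 0.0

q = ToyPeekLockQueue(lock_duration=30.0)
q.send("order-42")
assert q.receive(now=0.0) == "order-42"   # locked until t=30
assert q.receive(now=10.0) is None        # invisible while locked
assert q.receive(now=31.0) == "order-42"  # lock expired, redelivered
```

Note what the last line demonstrates: a slow consumer that misses the lock window does not lose the message, it simply gets redelivered, which is exactly why at-least-once processing requires idempotent handlers.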

Default Peek-Lock in Azure Functions

When you use Azure Service Bus triggers in Azure Functions, you generally don’t need to configure or manage the Peek-Lock behavior yourself. According to the official documentation, the default behavior in Azure Functions is already set to Peek-Lock. This means you can focus on your function’s core logic without explicitly dealing with message locking or completion in most scenarios.

Don’t Swallow Exceptions

One important detail to note is that in Azure Functions, any unhandled exceptions in your function code will signal to the runtime that message processing failed. This prevents the function from automatically completing the message, allowing the Service Bus to retry later. However, if you wrap your logic in a try/catch block and inadvertently swallow the exception—meaning you catch the error without rethrowing or handling it properly—you might unintentionally signal success. That would lead to the message being completed even though a downstream service might have failed.

Recommendation:

  • If you must use a try/catch, make sure errors are re-thrown or handled in a way that indicates failure if the message truly hasn’t been processed successfully. Otherwise, you’ll end up completing the message and losing valuable information about the error.
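In language-agnostic terms, the difference looks like this. The sketch below uses a hypothetical `process` function standing in for your business logic; the runtime completes the message only when the handler returns without raising:

```python
def bad_handler(message, process):
    """Anti-pattern: swallowing the exception signals success,
    so the runtime completes the message and the failure is lost."""
    try:
        process(message)
    except Exception as ex:
        print(f"Logged and ignored: {ex}")  # message still gets completed!

def good_handler(message, process):
    """Log, then let the exception propagate, so the runtime
    abandons the message and Service Bus can retry it later."""
    try:
        process(message)
    except Exception as ex:
        print(f"Processing failed: {ex}")
        raise  # signal failure to the runtime

def failing(msg):
    raise RuntimeError("downstream service unavailable")

bad_handler("order-1", failing)   # no exception escapes -> message completed
try:
    good_handler("order-1", failing)
except RuntimeError:
    print("runtime sees the failure and abandons the message")
```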

Typical Use Cases

  1. Financial Transactions: Losing a message that represents a monetary transaction is not an option. Peek-Lock ensures messages remain available until your code confirms it was successfully processed.
  2. Critical Notifications: If you have an alerting system that notifies users about important events, you don’t want those notifications disappearing in case of a crash.
  3. Order Processing: In ecommerce or supply chain scenarios, every order message has to be accounted for. Peek-Lock helps avoid partial or lost orders due to transient errors.

Example in C#

Here’s a short snippet that demonstrates how you can receive messages in Peek-Lock mode using the Azure.Messaging.ServiceBus library:

using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public class PeekLockExample
{
    private const string ConnectionString = "<YOUR_SERVICE_BUS_CONNECTION_STRING>";
    private const string QueueName = "<YOUR_QUEUE_NAME>";

    public async Task RunPeekLockSample()
    {
        // Create a Service Bus client
        var client = new ServiceBusClient(ConnectionString);

        // Create a receiver in Peek-Lock mode
        var receiver = client.CreateReceiver(
            QueueName, 
            new ServiceBusReceiverOptions 
            { 
                ReceiveMode = ServiceBusReceiveMode.PeekLock 
            }
        );

        try
        {
            // Attempt to receive a single message
            ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync(TimeSpan.FromSeconds(10));

            if (message != null)
            {
                // Process the message
                string body = message.Body.ToString();
                Console.WriteLine($"Processing message: {body}");

                // If processing is successful, complete the message
                await receiver.CompleteMessageAsync(message);
                Console.WriteLine("Message completed and removed from the queue.");
            }
            else
            {
                Console.WriteLine("No messages were available to receive.");
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine($"An error occurred: {ex.Message}");
            // Optionally handle or log the exception
        }
        finally
        {
            // Clean up resources
            await receiver.CloseAsync();
            await client.DisposeAsync();
        }
    }
}

What’s Happening Here?

  • We create a ServiceBusClient to connect to Azure Service Bus.
  • We specify ServiceBusReceiveMode.PeekLock when creating the receiver.
  • The code then attempts to receive one message and processes it.
  • If everything goes smoothly, we call CompleteMessageAsync to remove it from the queue. If something goes wrong, the message remains locked until the lock expires or until we choose to abandon it.

Final Thoughts

Peek-Lock strikes a balance between reliability and performance. It ensures you won’t lose critical data while giving you the flexibility to handle errors gracefully. Whether you’re dealing with financial operations, critical user notifications, or any scenario where each message must be processed correctly, Peek-Lock is an indispensable tool in your Azure Service Bus arsenal.

In Azure Functions, you get this benefit without having to manage the locking details, so long as you don’t accidentally swallow your exceptions. For other applications, adopting Peek-Lock might demand a bit more coding, but it’s well worth it if you need guaranteed, at-least-once message delivery.

Whether you’re building a simple queue-based workflow or a complex event-driven system, Peek-Lock ensures your messages remain safe until you decide they’re processed successfully. It’s a powerful approach that balances performance with reliability, which is why it’s a must-know feature for developers relying on Azure Service Bus.
