Technical Insights: Azure, .NET, Dynamics 365 & EV Charging Architecture


Mastering SOLID Principles in C# Development

The SOLID Principles of Object-Oriented Design and How to Use Them in C#

  • Enhances maintainability and scalability of applications.
  • Guides developers in crafting robust software systems.
  • Encourages extensible software architectures.
  • Improves reliability and promotes clean design.
  • Facilitates easier testing and mocking through abstraction.

Understanding SOLID Principles

The SOLID acronym comprises five principles:

  1. Single Responsibility Principle (SRP)
  2. Open/Closed Principle (OCP)
  3. Liskov Substitution Principle (LSP)
  4. Interface Segregation Principle (ISP)
  5. Dependency Inversion Principle (DIP)

While these principles are applicable across various programming languages, they align exceptionally well with C# due to its robust type system and object-oriented capabilities. Let’s delve into each principle in detail.

Single Responsibility Principle (SRP)

Definition: A class should have only one reason to change, meaning it should only have one job or responsibility.

Implementation in C#:

Consider the following implementation where a class violates SRP by performing multiple roles:


// Bad example - multiple responsibilities
public class UserService
{
    public void RegisterUser(string email, string password)
    {
        // Register user logic
        // Send email logic
        // Log activity
    }
}

In contrast, adhering to the Single Responsibility Principle leads to a more maintainable structure:


// Better example - single responsibility
public class UserRegistration
{
    private readonly EmailService _emailService;
    private readonly LoggingService _loggingService;
    
    public UserRegistration(EmailService emailService, LoggingService loggingService)
    {
        _emailService = emailService;
        _loggingService = loggingService;
    }
    
    public void RegisterUser(string email, string password)
    {
        // Only handle user registration
        var user = new User(email, password);
        SaveUserToDatabase(user); // persistence handled here (implementation omitted for brevity)
        
        _emailService.SendWelcomeEmail(email);
        _loggingService.LogActivity("User registered: " + email);
    }
}
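
The collaborating services each own exactly one concern. Minimal sketches of what they might look like (the implementations are placeholders):

public class EmailService
{
    public void SendWelcomeEmail(string email)
    {
        // SMTP/provider-specific sending logic lives here
    }
}

public class LoggingService
{
    public void LogActivity(string message)
    {
        // Write to the configured log sink
    }
}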

Benefits of SRP:

  • Improved maintainability as each class has a distinct responsibility.
  • Easier collaboration; team members can work on separate functionalities with minimal overlap.

Open/Closed Principle (OCP)

Definition: Software entities should be open for extension but closed for modification.

Implementation in C#:

Let’s assess a traditional approach that violates the OCP:


// Bad approach
public class AreaCalculator
{
    public double CalculateArea(object shape)
    {
        if (shape is Rectangle rectangle)
            return rectangle.Width * rectangle.Height;
        else if (shape is Circle circle)
            return Math.PI * circle.Radius * circle.Radius;
        
        throw new NotSupportedException("Shape not supported");
    }
}

By implementing the OCP, we can extend functionality without altering existing code:


// Better approach using OCP
public interface IShape
{
    double CalculateArea();
}

public class Rectangle : IShape
{
    public double Width { get; set; }
    public double Height { get; set; }
    
    public double CalculateArea()
    {
        return Width * Height;
    }
}

public class Circle : IShape
{
    public double Radius { get; set; }
    
    public double CalculateArea()
    {
        return Math.PI * Radius * Radius;
    }
}

// Now we can add new shapes without modifying existing code
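
With the abstraction in place, the calculator itself never needs to change when new shapes arrive; a minimal sketch:

public class AreaCalculator
{
    // Works for any IShape, current or future - the calculator stays closed for modification
    public double CalculateArea(IShape shape) => shape.CalculateArea();
}

// Adding a new shape is purely additive:
public class Triangle : IShape
{
    public double Base { get; set; }
    public double Height { get; set; }

    public double CalculateArea() => 0.5 * Base * Height;
}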

Benefits of OCP:

  • Encourages the development of extensible software architectures.
  • Reduces the risk of introducing bugs to existing functionalities.

Liskov Substitution Principle (LSP)

Definition: Objects of a superclass should be replaceable with objects of its subclasses without affecting the correctness of the program.

Implementation in C#:

Let’s critique this implementation which violates LSP:


// Violation of LSP
public class Rectangle
{
    public virtual int Width { get; set; }
    public virtual int Height { get; set; }
    
    public virtual int GetArea()
    {
        return Width * Height;
    }
}

public class Square : Rectangle
{
    public override int Width 
    { 
        get { return base.Width; }
        set { 
            base.Width = value;
            base.Height = value; // This breaks LSP
        }
    }
}

To adhere to LSP, we separate shape behavior into correct implementations:


// Better approach adhering to LSP
public interface IShape
{
    int GetArea();
}

public class Rectangle : IShape
{
    public int Width { get; set; }
    public int Height { get; set; }
    
    public int GetArea()
    {
        return Width * Height;
    }
}

public class Square : IShape
{
    public int Side { get; set; }
    
    public int GetArea()
    {
        return Side * Side;
    }
}
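
Because both types now honor the same IShape contract, callers can substitute them freely:

IShape[] shapes = { new Rectangle { Width = 2, Height = 3 }, new Square { Side = 4 } };
foreach (var shape in shapes)
    Console.WriteLine(shape.GetArea()); // 6, then 16 - each behaves exactly as its contract promises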

Benefits of LSP:

  • Promotes a reliable hierarchy, ensuring derived-class instances work seamlessly in place of base class instances.

Interface Segregation Principle (ISP)

Definition: Clients should not be forced to depend on interfaces they do not use.

Implementation in C#:

This example showcases a common mistake by violating ISP:


// Violation of ISP
public interface IWorker
{
    void Work();
    void Eat();
    void Sleep();
}

// Better approach with segregated interfaces
public interface IWorkable
{
    void Work();
}

public interface IEatable
{
    void Eat();
}

public interface ISleepable
{
    void Sleep();
}
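
To see why this matters, consider implementers that take only the contracts they need; a short sketch:

// A human worker needs all three behaviors
public class HumanWorker : IWorkable, IEatable, ISleepable
{
    public void Work()  { /* ... */ }
    public void Eat()   { /* ... */ }
    public void Sleep() { /* ... */ }
}

// A robot only works - it is never forced to implement Eat() or Sleep()
public class RobotWorker : IWorkable
{
    public void Work() { /* ... */ }
}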

Benefits of ISP:

  • Reduces side effects and promotes clean design, enhancing modularity.
  • Developers work with specific interfaces relevant to their implementations.

Dependency Inversion Principle (DIP)

Definition: High-level modules should not depend on low-level modules; both should depend on abstractions.

Implementation in C#:

Examine this flawed approach under DIP:


// Violation of DIP
public class NotificationService
{
    private readonly EmailSender _emailSender;
    
    public NotificationService()
    {
        _emailSender = new EmailSender();
    }
    
    public void SendNotification(string message, string recipient)
    {
        _emailSender.SendEmail(message, recipient);
    }
}

Implementing DIP effectively allows for a more flexible design:


// Better approach using DIP
public interface IMessageSender
{
    void SendMessage(string message, string recipient);
}

public class EmailSender : IMessageSender
{
    public void SendMessage(string message, string recipient)
    {
        // Email sending logic
    }
}

public class SMSSender : IMessageSender
{
    public void SendMessage(string message, string recipient)
    {
        // SMS sending logic
    }
}

public class NotificationService
{
    private readonly IMessageSender _messageSender;
    
    public NotificationService(IMessageSender messageSender)
    {
        _messageSender = messageSender;
    }
    
    public void SendNotification(string message, string recipient)
    {
        _messageSender.SendMessage(message, recipient);
    }
}
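
At application startup, the abstraction is typically wired up through a DI container; a minimal sketch using Microsoft.Extensions.DependencyInjection:

using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// Swap EmailSender for SMSSender here without touching NotificationService
services.AddSingleton<IMessageSender, EmailSender>();
services.AddTransient<NotificationService>();

using var provider = services.BuildServiceProvider();
var notifier = provider.GetRequiredService<NotificationService>();
notifier.SendNotification("Welcome aboard!", "user@example.com");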

Benefits of DIP:

  • Enhances the flexibility and reusability of code.
  • Facilitates easier testing and mocking through abstraction.

Conclusion

Incorporating the SOLID principles in C# development results in several benefits, such as improved maintainability, enhanced testability, increased flexibility, better code organization, and reduced technical debt. As applications grow in scale and complexity, consciously applying these principles will contribute to producing robust, maintainable, and adaptable software systems.

By prioritizing SOLID principles in your coding practices, you won’t just write C# code; you’ll create software that stands the test of time.

If you’re interested in exploring further implementation examples, feel free to connect with me on LinkedIn or check out my GitHub. Happy coding!

FAQ

What are the SOLID principles?

The SOLID principles are five design principles that help software developers create more maintainable and flexible systems.

How does SRP improve code quality?

SRP enhances code quality by ensuring that a class has only one reason to change, making it easier to manage and understand.

What advantages does OCP provide?

OCP allows developers to extend functionalities without changing existing code, reducing bugs and improving code safety.

Can LSP help avoid bugs?

Yes, adhering to LSP promotes a reliable class hierarchy and helps to avoid bugs that can arise from unexpected behavior in subclasses.

Why is Dependency Inversion important?

DIP is crucial for reducing coupling and enhancing flexibility, making it easier to change or replace components without affecting high-level modules.

Architecting Scalable OCPP Compliant EV Charging Platforms


  • Understanding OCPP: A pivotal standard for interoperability in charging networks.
  • Benefits: Highlights include hardware agnosticism, interoperability, and enhanced security.
  • Key Components: Focuses on backend design, CSMS, and certification compliance.
  • Real-World Examples: Showcases implementations by EV Connect and AMPECO.
  • Future Considerations: Emphasizes upgradeability, scalability, and evolving security needs.

Understanding OCPP

The Open Charge Point Protocol (OCPP) serves as the communication backbone between EV chargers and Charging Station Management Systems (CSMS). By facilitating interoperability, OCPP allows network operators to seamlessly integrate different brands of charging stations into a unified ecosystem. As a widely embraced standard, OCPP is crucial in establishing cohesive charging networks without being constrained by vendor-specific technologies.

Currently, multiple versions of OCPP are in play:

  • OCPP 1.5: An early version that provided the basic functionality for communication between chargers and the CSMS.
  • OCPP 1.6: A more robust version adding features like improved error handling and enhanced security protocols.
  • OCPP 2.0.1: The latest iteration emphasizing advanced security and additional capabilities, which offers certifications for core and advanced modules through the Open Charge Alliance (OCA).

With the impending rollout of more certification modules in March 2025, OCPP compliance is set to become an industry-standard requirement that platform architects must consider when designing scalable charging solutions.

Benefits of OCPP-Based Architecture

Hardware Agnosticism

One of the standout features of OCPP is its ability to enable hardware-agnostic charging platforms. Network operators can integrate any OCPP-compliant charger, independent of the manufacturer. For instance, AMPECO’s platform claims compatibility with over 70 leading charging station manufacturers, emphasizing OCPP’s flexibility and adaptability. This characteristic allows businesses to scale their operations without being locked into a specific vendor’s ecosystem, providing freedom for future growth and innovation.

Interoperability and Future-Proofing

Adopting OCPP standards is pivotal for ensuring that charging networks remain compatible across generations of equipment. By focusing on OCPP compliance, operators mitigate the risk of fragmented systems that could render investments obsolete when technology advances. This forward-thinking approach is essential for maintaining competitive advantages in a fast-evolving marketplace.

Security Enhancements

With OCPP 2.0.1, security is elevated to new heights. The implementation of advanced security modules helps safeguard charging networks against emerging threats. For example, EV Connect’s OCPP 2.0.1 certification signifies a commitment to robust security measures, ensuring that as charging infrastructures scale, they retain their integrity and protection against potential vulnerabilities.

Key Components for Scalable Architecture

Architecting a scalable, OCPP-compliant platform necessitates careful consideration of several key components:

Backend System Design

A robust backend design is crucial for supporting multiple OCPP versions concurrently. Given that charging networks often incorporate a mix of equipment operating on different protocol versions, the architecture must be flexible and capable of handling various communication standards. For instance, AMPECO’s platform supports a triad of versions: OCPP 1.5, 1.6, and 2.0.1, demonstrating the importance of backward compatibility in charging network design.
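
One concrete pattern for multi-version support is to negotiate the protocol version per charger at the WebSocket handshake, since OCPP-J advertises the version as a subprotocol. A minimal ASP.NET Core sketch (the endpoint wiring and the supported version set are assumptions):

app.Use(async (context, next) =>
{
    if (context.WebSockets.IsWebSocketRequest)
    {
        // Chargers offer e.g. "ocpp1.6" or "ocpp2.0.1" as WebSocket subprotocols.
        var offered = context.WebSockets.WebSocketRequestedProtocols;
        string chosen = offered.Contains("ocpp2.0.1") ? "ocpp2.0.1"
                      : offered.Contains("ocpp1.6") ? "ocpp1.6"
                      : null;

        if (chosen == null)
        {
            context.Response.StatusCode = 400; // no mutually supported version
            return;
        }

        using var socket = await context.WebSockets.AcceptWebSocketAsync(chosen);
        // Hand the session to a version-specific message handler here.
    }
    else
    {
        await next();
    }
});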

Charging Station Management System (CSMS)

The CSMS acts as the nerve center for the entire charging network, directing communication between connected charging stations and managing their operational status. This component must be designed for horizontal scalability, enabling additional charging points to be integrated seamlessly as demand grows.

Certification Compliance

Pursuing official OCPP certification through the OCA is vital for ensuring interoperability and long-term viability. A certified platform is not only a mark of quality; it also guarantees adherence to global standards, laying the foundation for seamless integration with certified charging hardware. This compliance is fundamental for engendering trust among network operators and users alike.

Real-World Implementation Examples

EV Connect’s OCPP 2.0.1 Implementation

In March 2025, EV Connect announced its achievement of OCPP 2.0.1 certification for both Core and Advanced Security modules. This milestone illustrates their dedication to open standards and the interoperability of their solutions. By leveraging OCPP compliance, EV Connect enhances user experiences through a reliable and efficient charging ecosystem, marking a significant step toward long-term stability and adaptability in the industry.

AMPECO’s Multi-Version Support

AMPECO’s EV Charging Platform stands out as a prime example of scalable architecture capable of supporting multiple OCPP versions simultaneously. Their hardware-agnostic approach allows them to integrate diverse manufacturers through OCPP compliance, proving the viability and flexibility of their solution. Such an adaptable architecture is essential for operators seeking to broaden their network without compromising on service quality.

Future Considerations

When designing scalable OCPP-compliant platforms, architects and engineers must contemplate several key future-oriented factors:

  • Future Upgradeability: Establish a framework that allows for seamless upgrades to future OCPP versions without requiring a complete overhaul.
  • Backward Compatibility: Ensure that newer systems can still interact with older OCPP implementations, preserving existing investments.
  • Scalability: Design systems that can efficiently handle thousands to millions of charging sessions, accommodating growth trajectories as EV adoption rises significantly.
  • Evolving Security Protocols: Regularly update security measures to keep pace with emerging threats and standards in the cybersecurity landscape.
  • Integration with Energy Management Systems: Explore the potential for integrating charging platforms with broader energy management infrastructures for optimized performance and resource utilization.

Summary

In conclusion, designing scalable OCPP-compliant EV charging platforms involves intricate knowledge of the OCPP standard and its implications for interoperability, security, and future-proofing. As the EV market continues its rapid expansion, architects must emphasize the importance of building robust, flexible, and certification-compliant systems that can support a diverse ecosystem of charging stations.

By leveraging OCPP standards, businesses can forge ahead in developing agile, adaptable charging infrastructures that are not only capable of handling present demands but are also well-prepared for future innovations in the electric vehicle landscape.

If you’d like to discuss innovative approaches to OCPP compliance or explore architectural strategies for your next project, connect with me on LinkedIn, or check out my GitHub for implementation examples!

FAQs

What is OCPP?

OCPP stands for Open Charge Point Protocol, which is a communication standard that allows for interoperability between electric vehicle chargers and management systems.

Why is security important in OCPP?

Security in OCPP is vital to protect charging networks from cyber threats and to ensure the integrity and reliability of EV charging systems.

How does hardware agnosticism benefit operators?

Hardware agnosticism allows operators to choose among various OCPP-compliant chargers without being locked into a specific manufacturer, enhancing efficiency and scalability.

What are the key features of OCPP 2.0.1?

Key features of OCPP 2.0.1 include enhanced security protocols, better error handling, and the ability to support a broader range of functionalities for charging stations.

Fixing “spawn npx ENOENT” in Windows 11 When Adding MCP Server with Node/NPX

If you’re running into the error:

spawn npx ENOENT

while configuring an MCP (Model Context Protocol) server on Windows 11, you’re not alone. This error commonly appears when integrating tools like @upstash/context7-mcp using Node.js environments that rely on NPX, especially in cross-platform development.

This post explains:

  • What causes the “spawn npx ENOENT” error on Windows
  • The difference between two MCP server configuration methods
  • A working fix using cmd /c
  • Why this issue is specific to Windows

The Problem: “spawn npx ENOENT”

Using this configuration in your .mcprc.json or a similar setup:

{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp@latest"]
    }
  }
}

will cause the following error on Windows:

spawn npx ENOENT

This indicates that Node.js tried to spawn npx but couldn’t locate it in the system’s PATH.

Root Cause: Windows vs Unix Shell Behavior

On UNIX-like systems (Mac/Linux), spawn can run shell commands like npx directly. But Windows behaves differently:

  • Windows expects a .exe file to be explicitly referenced when spawning a process.
  • npx is not a native binary executable; it requires a shell to interpret and run it.
  • Node’s child_process.spawn does not invoke a shell by default unless specifically instructed.

In the failing example, the system tries to invoke npx directly as if it were a standalone executable, which doesn’t work on Windows.

The Fix: Wrapping with cmd /c

This configuration solves the issue:

{
  "mcpServers": {
    "context7": {
      "command": "cmd",
      "args": [
        "/c",
        "npx",
        "-y",
        "@upstash/context7-mcp@latest"
      ]
    }
  }
}

Explanation

  • "cmd" invokes the Windows Command Prompt.
  • "/c" tells the shell to execute the command that follows.
  • The rest of the line (npx -y @upstash/context7-mcp@latest) is interpreted and executed properly by the shell.

This ensures that npx is resolved correctly and executed within a compatible environment.

Technical Comparison

  • "command": "npx": fails on Windows. No shell is used, so Node tries to execute npx directly as a binary it cannot find.
  • "command": "cmd", "args": ["/c", "npx", ...]: works. The command runs inside the Windows shell, which resolves npx correctly.

Best Practices

When using Node.js-based CLI tools across platforms:

  • Wrap shell commands using cmd /c (Windows) or sh -c (Unix)
  • Avoid assuming that commands like npx are executable as binaries
  • Test your scripts in both Windows and Unix environments when possible

Conclusion

If you’re encountering the spawn npx ENOENT error when configuring MCP servers on Windows 11, the fix is straightforward: use cmd /c to ensure shell interpretation. This small change ensures compatibility and prevents runtime errors across different operating systems.

OCPP 1.6: The Unsung Hero Powering Your EV Charge (But It’s Getting a Major Upgrade!) – A Deep Dive

Ever pulled up to a charging station, plugged in, and watched your electric vehicle magically start to juice up? That seamless experience isn’t magic; it’s the result of a communication protocol called OCPP – the Open Charge Point Protocol. And for a significant chapter in the EV revolution, version 1.6 was the quiet workhorse behind the scenes, ensuring smooth communication between your car and the charging infrastructure. Think of it as the universal translator that made charging stations and management systems speak the same language.

Why Should You Care About OCPP 1.6? (Even If “Protocol” Sounds Like Tech Jargon)

Let’s be honest, “protocol” doesn’t exactly scream excitement. But here’s why OCPP 1.6 mattered, and why it’s worth a quick chat:

  • Charging Anywhere, Anytime: Imagine if your phone only worked with certain cell towers. Chaos, right? OCPP 1.6 prevented that in the EV world. It meant you could plug into a wider range of chargers, regardless of who made them or managed them.
  • Remote Control for Operators: Think of charging station operators as air traffic controllers for electricity. OCPP 1.6 gave them the ability to monitor, control, and update stations remotely. This meant faster fixes, better service, and even dynamic pricing adjustments.
  • Data-Driven Optimization: OCPP 1.6 allowed for the collection of valuable data on charging patterns. This data helped operators understand usage, optimize pricing, and improve the overall charging experience.

Taking a Slightly Deeper Dive (But Still Keeping it Real)

So, how did this “universal translator” actually work? It broke down charging tasks into manageable “profiles,” like departments in a well-organized company:

  • Core Profile: The Front Desk: This is where the basic interactions happened: verifying user IDs, starting and stopping charging sessions, and reporting energy usage. Messages like Authorize, BootNotification, and MeterValues handled these crucial tasks.
  • Firmware Management: The IT Department: Keeping charging stations up-to-date is vital for security and functionality. This profile allowed for remote firmware updates, ensuring stations were running the latest software.
  • Local Authorization List: The Offline Backup: Ever lose internet connection? This profile allowed charging to continue even when the network was down, using a local list of authorized users.
  • Reservation Profile: The Booking System: This allowed users to reserve charging slots, ensuring a spot was available when needed.
  • Smart Charging Profile: The Energy Optimizer: This profile enabled dynamic energy management, balancing grid load and optimizing charging schedules.
  • Remote Trigger Profile: The On-Demand Information Request: This allowed the central system to request specific data from the charging station whenever needed.

Understanding Message Structure: JSON (OCPP-J)

Since JSON is the more prevalent format in OCPP 1.6, let’s focus on that. OCPP-J messages are plain JSON arrays. A CALL (request) carries four elements, while a CALLRESULT (response) omits the Action and carries three:

  1. MessageTypeId: Indicates the message type (2 = CALL, 3 = CALLRESULT, 4 = CALLERROR).
  2. UniqueId: Correlates a request with its response.
  3. Action: The OCPP message name (e.g., “Authorize,” “MeterValues”); present in CALL messages only.
  4. Payload: The message’s data in JSON object format.

Example Messages:

  1. Authorize Request (CALL):
    • [2, "12345", "Authorize", {"idTag": "ABCDEF1234567890"}]
  2. Authorize Response (CALLRESULT, no Action element):
    • [3, "12345", {"idTagInfo": {"status": "Accepted"}}]
  3. MeterValues Request (CALL):
    • [2, "67890", "MeterValues", {"connectorId": 1, "transactionId": 9876, "meterValue": [{"timestamp": "2024-10-27T10:00:00Z", "sampledValue": [{"value": "1234", "unit": "Wh", "measurand": "Energy.Active.Import.Register"}]}]}]
  4. StatusNotification Request (CALL):
    • [2, "13579", "StatusNotification", {"connectorId": 1, "status": "Charging", "timestamp": "2024-10-27T10:05:00Z"}]
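
To make the framing concrete, here is a minimal C# sketch (using System.Text.Json; the helper names are illustrative, not part of any OCPP library) that builds an Authorize CALL and reads the header of an incoming frame:

using System.Text.Json;

public static class OcppFraming
{
    // Build an Authorize CALL: [2, uniqueId, "Authorize", {"idTag": ...}]
    public static string BuildAuthorizeCall(string uniqueId, string idTag) =>
        JsonSerializer.Serialize(new object[] { 2, uniqueId, "Authorize", new { idTag } });

    // Read the elements common to every frame.
    public static (int MessageTypeId, string UniqueId) ReadHeader(string rawFrame)
    {
        using JsonDocument doc = JsonDocument.Parse(rawFrame);
        JsonElement frame = doc.RootElement;
        // For a CALL, frame[2] is the Action and frame[3] the payload;
        // for a CALLRESULT, frame[2] is already the payload.
        return (frame[0].GetInt32(), frame[1].GetString()!);
    }
}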

OCPP 1.6 Message Rundown:

Here’s a quick overview of all the messages in OCPP 1.6, organized by profile:

Core Profile:

  • Authorize: Checks user authorization.
  • BootNotification: Charge Point sends upon startup.
  • ChangeAvailability: Sets Charge Point/connector availability.
  • ChangeConfiguration: Modifies Charge Point configuration.
  • ClearCache: Clears local authorization cache.
  • DataTransfer: Vendor-specific data exchange.
  • GetConfiguration: Retrieves Charge Point configuration.
  • Heartbeat: Charge Point sends to indicate online status.
  • MeterValues: Reports energy consumption.
  • RemoteStartTransaction/RemoteStopTransaction: Remote charging control.
  • Reset: Reboots the Charge Point.
  • StartTransaction: Charge Point sends at charging start.
  • StatusNotification: Reports Charge Point status.
  • StopTransaction: Charge Point sends at charging end.
  • UnlockConnector: Remote connector release.

Firmware Management Profile:

  • GetDiagnostics: Requests diagnostic logs.
  • DiagnosticsStatusNotification: Reports diagnostic log upload status.
  • FirmwareStatusNotification: Reports firmware update status.
  • UpdateFirmware: Initiates firmware update.

Local Authorization List Management Profile:

  • GetLocalListVersion: Checks local list version.
  • SendLocalList: Updates local authorization list.

Reservation Profile:

  • ReserveNow: Requests a reservation.
  • CancelReservation: Cancels a reservation.

Smart Charging Profile:

  • SetChargingProfile: Sets charging schedules/limits.
  • ClearChargingProfile: Removes charging profiles.
  • GetCompositeSchedule: Requests active charging schedule.

Remote Trigger Profile:

  • TriggerMessage: Requests specific messages from Charge Point.

Security: The Silent Guardian (And Where We Need to Step Up)

Security is paramount in the EV world. After all, we’re dealing with sensitive data and high-voltage electricity. OCPP 1.6 incorporated:

  • TLS Encryption: The Secure Tunnel: This encrypted communication between charging stations and management systems, protecting data from unauthorized access.
  • Authentication Mechanisms: The ID Check: This verified the identity of users and devices, ensuring only authorized parties could access the charging infrastructure.
  • Secure Firmware Updates: The Software Integrity Check: This ensured that firmware updates were legitimate and not malicious software.

However, OCPP 1.6 wasn’t perfect. Some of its older security methods, like basic username/password authentication, were vulnerable to attack, and vulnerabilities in how messages were handled have also been discovered.

The Future is Here: OCPP 2.0.1 and Beyond – A Necessary Evolution

While OCPP 1.6 served its purpose, the EV landscape is rapidly evolving. That’s why we’re seeing the rise of OCPP 2.0.1 and OCPP 2.1 – a major upgrade in terms of features and security:

  • Enhanced Device Management: More granular control and monitoring of charging stations.
  • Stronger Security Protocols: Advanced encryption, certificate-based authentication, and defined security profiles.
  • Advanced Smart Charging Capabilities: Integration with energy management systems, dynamic load balancing, and support for ISO 15118.
  • Native ISO 15118 Support: Enabling features like “Plug & Charge,” where EVs can automatically authenticate and charge without user intervention.
  • Bidirectional Charging (V2G/V2X): Enabling EVs to send power back to the grid, transforming them into mobile energy storage units.
  • Improved Error Handling and Data Compression: Making the system more robust and efficient.

The Human Takeaway: Embracing the Future of EV Charging

OCPP 1.6 was a crucial stepping stone in the EV revolution, laying the foundation for interoperability. With OCPP 2.0.1 and beyond, that foundation is now being extended with stronger security and smarter charging.

What is OCPP? A Complete Guide to the EV Charging Communication Protocol

As electric vehicles (EVs) become more mainstream, the infrastructure that powers them is evolving rapidly. Behind the scenes of every public EV charger is a smart communication layer that ensures chargers operate efficiently, securely, and interoperably. That communication standard is called OCPP — Open Charge Point Protocol.

In this article, we’ll break down what OCPP is, why it matters, how it works, and the different versions available today. Whether you’re an EV driver, charging network operator, or tech enthusiast, this guide will help you understand how OCPP is shaping the future of electric mobility.

🔌 What is OCPP?

OCPP (Open Charge Point Protocol) is an application protocol used to enable communication between Electric Vehicle Supply Equipment (EVSE)—commonly known as EV chargers—and a Central Management System (CMS), often referred to as a Charge Point Operator (CPO) backend.

It is vendor-neutral and openly published, developed by the Open Charge Alliance (OCA) to standardize how EV chargers and management systems talk to each other.

Think of OCPP as the universal “language” between the charging station and the software that manages it.

⚙️ How OCPP Works

OCPP defines a set of WebSocket-based or SOAP-based messages that are exchanged between the client (charge point) and the server (backend system).

For example:

  • When a driver plugs in their EV, the charger sends a StartTransaction message to the backend.
  • The backend authenticates the session and sends a StartTransactionConfirmation.
  • Once charging ends, the charger sends a StopTransaction message.

Other key message types include:

  • Heartbeat: to ensure the charger is online
  • StatusNotification: to report charger availability
  • BootNotification: sent when the charger powers up
  • MeterValues: for usage data and billing
  • UpdateFirmware, GetDiagnostics, and RemoteStartTransaction/RemoteStopTransaction commands

These interactions enable remote control, monitoring, diagnostics, and software updates — all of which are essential for smart charging infrastructure.

🚀 Why is OCPP Important?

  • Interoperability: OCPP allows chargers from different manufacturers to connect to any compliant backend, reducing vendor lock-in.
  • Scalability: Operators can manage thousands of chargers efficiently using a single system.
  • Smart Charging: OCPP supports load balancing, grid integration, and energy optimization.
  • Security: Latest versions support enhanced encryption, authentication, and access control mechanisms.

OCPP is especially important for public EV charging networks, fleet operators, municipalities, and utility companies that require flexibility and operational efficiency.

🔢 OCPP Versions Explained

Over the years, OCPP has evolved to meet the growing demands of EV infrastructure. Here’s a look at its major versions:

OCPP 1.2 (2009)

  • The first version
  • Limited functionality
  • Largely outdated and no longer used

OCPP 1.5

  • Improved stability
  • Better message structure
  • Still lacks advanced features

OCPP 1.6 (2015)

  • Most widely deployed version
  • Supports JSON over WebSocket and SOAP
  • Adds:
    • Remote Start/Stop
    • Smart Charging (Load Profiles)
    • Firmware Management
    • Diagnostics
  • Still supported by most major networks today

OCPP 2.0 (2018)

  • Major overhaul of the protocol
  • Adds:
    • Device Management
    • Security Profiles
    • ISO 15118 integration (Plug & Charge)
    • Improved Smart Charging
    • Better data modeling

OCPP 2.0.1 (2020)

  • The latest stable version
  • Focused on bug fixes and practical enhancements from real-world implementations
  • Growing adoption in next-generation networks

📝 Note: OCPP 2.x is not backward compatible with 1.6, but many platforms support dual-stack operation.

🛠️ Technical Architecture Overview

A typical OCPP-based EV charging setup consists of:

  1. Charge Point (Client):
    • Hardware installed at EV charging stations
    • Acts as the OCPP client
    • Initiates communication
  2. Central System (Server):
    • Backend system that processes OCPP messages
    • Manages user sessions, pricing, diagnostics, and energy usage
  3. Communication Layer:
    • Typically uses WebSockets over TLS for secure, real-time, full-duplex communication
    • Some older implementations use SOAP over HTTP
  4. Optional Add-ons:
    • Token authentication (RFID, app-based)
    • OCPI/OSCP/ISO 15118 integration for roaming and advanced smart grid features
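
To illustrate the client role concretely, a charge point boils down to a WebSocket client that speaks OCPP-J; a minimal C# sketch (the URL, charge point ID, and vendor fields are placeholders):

using System.Net.WebSockets;
using System.Text;

using var ws = new ClientWebSocket();
// OCPP-J advertises the protocol version as a WebSocket subprotocol.
ws.Options.AddSubProtocol("ocpp1.6");
await ws.ConnectAsync(new Uri("wss://csms.example.com/ocpp/CP001"), CancellationToken.None);

// BootNotification is the first CALL a charge point sends after powering up.
string bootNotification =
    "[2,\"uid-1\",\"BootNotification\",{\"chargePointVendor\":\"Acme\",\"chargePointModel\":\"X1\"}]";
await ws.SendAsync(new ArraySegment<byte>(Encoding.UTF8.GetBytes(bootNotification)),
    WebSocketMessageType.Text, endOfMessage: true, CancellationToken.None);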

🔒 Security in OCPP

Starting with OCPP 2.0, the protocol includes support for secure communication profiles, including:

  • TLS Encryption
  • Client-side and server-side certificates
  • Secure firmware updates
  • Signed metering and transaction data

These features make OCPP ready for enterprise-scale, mission-critical deployments.

🌍 Real-World Use Cases

  • Public Charging Networks: Roaming across different charger brands
  • Fleet Management: Real-time diagnostics and energy consumption tracking
  • Retail Sites & Fuel Stations: Revenue tracking and load optimization
  • Smart Cities & Utilities: Demand response and grid integration

📈 Final Thoughts

OCPP is the backbone of modern EV charging infrastructure. As the electric vehicle ecosystem expands, having a universal, open, and future-ready protocol like OCPP ensures that EV charging remains reliable, scalable, and secure.

Whether you’re deploying 5 chargers in a parking lot or 5,000 across a city, OCPP gives you the flexibility to choose the hardware and software that suit your needs — all while ensuring interoperability with the rest of the EV ecosystem.

Want to learn more about OCPP, EV charging, or smart infrastructure? Follow this blog for future deep-dives, comparisons, and real-world implementation guides!

Scraping JSON-LD from a Next.js Site with Crawl4AI: My Debugging Journey

Scraping data from modern websites can feel like a puzzle, especially when they’re built with Next.js and all that fancy JavaScript magic. Recently, I needed to pull some product info—like names, prices, and a few extra details—from an e-commerce page that was giving me a headache. The site (let’s just call it https://shop.example.com/products/[hidden-stuff]) used JSON-LD tucked inside a <script> tag, but my first attempts with Crawl4AI came up empty. Here’s how I cracked it, step by step, and got the data I wanted.

The Headache: Empty Results from a Next.js Page

I was trying to grab details from a product page—think stuff like the item name, description, member vs. non-member prices, and some category info. The JSON-LD looked something like this (I’ve swapped out the real details for a fake example):

{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Beginner’s Guide to Coffee Roasting",
  "description": "Learn the basics of roasting your own coffee beans at home. Recorded live last summer.",
  "provider": {
    "@type": "Organization",
    "name": "Bean Enthusiast Co."
  },
  "offers": [
    {"@type": "Offer", "price": 49.99, "priceCurrency": "USD"},
    {"@type": "Offer", "price": 59.99, "priceCurrency": "USD"}
  ],
  "skillLevel": "Beginner",
  "hasWorkshop": [
    {
      "@type": "WorkshopInstance",
      "deliveryMethod": "Online",
      "workshopSchedule": {"startDate": "2024-08-15"}
    }
  ]
}

My goal was to extract this, label the cheaper price as “member” and the higher one as “non-member,” and snag extras like skillLevel and deliveryMethod. Simple, right? Nope. My first stab at it with Crawl4AI gave me nothing—just an empty [].

What Went Wrong: Next.js Threw Me a Curveball

Next.js loves doing things dynamically, which means the JSON-LD I saw in my browser’s dev tools wasn’t always in the raw HTML Crawl4AI fetched. I started with this basic setup:

import json

from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

schema = {
    "name": "Product Schema",
    "baseSelector": "script[type='application/ld+json']",
    "fields": [{"name": "json_ld_content", "selector": "script[type='application/ld+json']", "type": "text"}]
}

async def extract_data(url):
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url=url, extraction_strategy=JsonCssExtractionStrategy(schema))
        extracted_data = json.loads(result.extracted_content)
        print(extracted_data)

# Output: []

Empty. Zilch. I dug into the debug output and saw the JSON-LD was in result.html, but result.extracted_content was blank. Turns out, Next.js was injecting that <script> tag after the page loaded, and Crawl4AI wasn’t catching it without some extra nudging.

How I Fixed It: A Workaround That Worked

After banging my head against the wall, I figured out I needed to make Crawl4AI wait for the JavaScript to do its thing and then grab the JSON-LD myself from the HTML. Here’s the code that finally worked:

import json
import asyncio
from crawl4ai import AsyncWebCrawler

async def extract_product_schema(url):
    async with AsyncWebCrawler(verbose=True, user_agent="Mozilla/5.0") as crawler:
        print(f"Checking out: {url}")
        result = await crawler.arun(
            url=url,
            js_code=[
                "window.scrollTo(0, document.body.scrollHeight);",  # Wake up the page
                "await new Promise(resolve => setTimeout(resolve, 5000));"  # Give it 5 seconds
            ],
            bypass_cache=True,
            timeout=30
        )

        if not result.success:
            print(f"Oops, something broke: {result.error_message}")
            return None

        # Digging into the HTML myself
        html = result.html
        start_marker = '<script type="application/ld+json">'
        end_marker = '</script>'
        start_pos = html.find(start_marker)
        if start_pos == -1:
            print("Couldn’t find the JSON-LD.")
            return None
        start_idx = start_pos + len(start_marker)  # offset only after confirming the marker exists
        end_idx = html.find(end_marker, start_idx)
        if end_idx == -1:
            print("Couldn’t find the JSON-LD.")
            return None

        json_ld_raw = html[start_idx:end_idx].strip()
        json_ld = json.loads(json_ld_raw)

        # Sorting out the product details
        if json_ld.get("@type") == "Product":
            offers = sorted(
                [{"price": o.get("price"), "priceCurrency": o.get("priceCurrency")} for o in json_ld.get("offers", [])],
                key=lambda x: x["price"]
            )
            workshop_instances = json_ld.get("hasWorkshop", [])
            schedule = workshop_instances[0].get("workshopSchedule", {}) if workshop_instances else {}
            
            product_info = {
                "name": json_ld.get("name"),
                "description": json_ld.get("description"),
                "providerName": json_ld.get("provider", {}).get("name"),
                "memberPrice": offers[0] if offers else None,
                "nonMemberPrice": offers[-1] if offers else None,
                "skillLevel": json_ld.get("skillLevel"),
                "deliveryMethod": workshop_instances[0].get("deliveryMethod") if workshop_instances else None,
                "startDate": schedule.get("startDate")
            }
            return product_info
        print("No product data here.")
        return None

async def main():
    url = "https://shop.example.com/products/[hidden-stuff]"
    product_data = await extract_product_schema(url)
    if product_data:
        print("Here’s what I got:")
        print(json.dumps(product_data, indent=2))

if __name__ == "__main__":
    asyncio.run(main())

What I Got Out of It

{
  "name": "Beginner’s Guide to Coffee Roasting",
  "description": "Learn the basics of roasting your own coffee beans at home. Recorded live last summer.",
  "providerName": "Bean Enthusiast Co.",
  "memberPrice": {
    "price": 49.99,
    "priceCurrency": "USD"
  },
  "nonMemberPrice": {
    "price": 59.99,
    "priceCurrency": "USD"
  },
  "skillLevel": "Beginner",
  "deliveryMethod": "Online",
  "startDate": "2024-08-15"
}

How I Made It Work

  • Waiting for JavaScript: I told Crawl4AI to scroll and hang out for 5 seconds with js_code. That gave Next.js time to load everything up.
  • DIY parsing: The built-in extractor wasn’t cutting it, so I searched the HTML for the <script> tag and pulled the JSON-LD out myself.
  • Price tags: I sorted the prices and called the lowest “member” and the highest “non-member,” which seemed like a safe bet for this site.

What I Learned Along the Way

  • Next.js is Tricky: It’s not just about the HTML you get—it’s about what shows up after the JavaScript runs. Timing is everything.
  • Sometimes You Gotta Get Hands-On: When the fancy tools didn’t work, digging into the raw HTML saved me.
  • Debugging Pays Off: Printing out the HTML and extractor output showed me exactly where things were going wrong.

Azure Service Bus Peek-Lock: A Comprehensive Guide to Reliable Message Processing

Working with Peek-Lock in Azure Service Bus: A Practical Guide

In many distributed systems, reliable message handling is a top priority. When I first started building an order processing application, I learned very quickly that losing even one message could cause major headaches. That’s exactly where Azure Service Bus and its Peek-Lock mode came to the rescue. By using Peek-Lock, you don’t remove the message from the queue as soon as you receive it. Instead, you lock it for a certain period, process it, and then decide what to do next—complete, abandon, dead-letter, or defer. Here’s how it all fits together.

Why Peek-Lock Matters

Peek-Lock is one of the two receiving modes offered by Azure Service Bus. The other is Receive and Delete, which automatically removes messages from the queue upon receipt. While that might be fine for scenarios where occasional message loss is acceptable, many real-world applications need stronger guarantees.

  1. Reliability: With Peek-Lock, if processing fails, you can abandon the message. This makes it visible again for another attempt, reducing the risk of data loss.
  2. Explicit Control: You decide when a message is removed. After you successfully handle the message (e.g., update a database or complete a transaction), you explicitly mark it as complete.
  3. Error Handling: If the same message repeatedly fails, you can dead-letter it for investigation. This helps avoid getting stuck in an endless processing loop.
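
Each of these outcomes maps to a single receiver call; a minimal sketch (ProcessAsync and the exception types stand in for your own logic):

ServiceBusReceivedMessage msg = await receiver.ReceiveMessageAsync();
try
{
    await ProcessAsync(msg);                      // your processing (illustrative)
    await receiver.CompleteMessageAsync(msg);     // success: remove from the queue
}
catch (TimeoutException)
{
    await receiver.AbandonMessageAsync(msg);      // transient failure: make it visible again
}
catch (Exception ex)
{
    await receiver.DeadLetterMessageAsync(msg, deadLetterReason: ex.Message);
}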

What Happens If the Lock Expires?

By default, the lock is held for a certain period (often 30 seconds, which can be adjusted). If your code doesn’t complete or abandon the message before the lock expires, the message becomes visible to other receivers. To handle potentially lengthy processes, you can renew the lock programmatically, although that introduces additional complexity. The key takeaway is that you should design your service to either complete or abandon messages quickly, or renew the lock if more time is truly necessary.
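
If a handler occasionally needs more time, the lock can be extended with RenewMessageLockAsync; a minimal sketch (the work method is illustrative):

ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
using var renewCts = new CancellationTokenSource();

// Renew roughly every 20 seconds; each call restarts the lock duration.
Task renewal = Task.Run(async () =>
{
    while (!renewCts.IsCancellationRequested)
    {
        await Task.Delay(TimeSpan.FromSeconds(20));
        if (renewCts.IsCancellationRequested) break;
        await receiver.RenewMessageLockAsync(message);
    }
});

await DoLongRunningWorkAsync(message);   // your processing (illustrative)
renewCts.Cancel();                       // stop renewing
await receiver.CompleteMessageAsync(message);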

Default Peek-Lock in Azure Functions

When you use Azure Service Bus triggers in Azure Functions, you generally don’t need to configure or manage the Peek-Lock behavior yourself. According to the official documentation, the default behavior in Azure Functions is already set to Peek-Lock. This means you can focus on your function’s core logic without explicitly dealing with message locking or completion in most scenarios.

Don’t Swallow Exceptions

One important detail to note is that in Azure Functions, any unhandled exceptions in your function code will signal to the runtime that message processing failed. This prevents the function from automatically completing the message, allowing the Service Bus to retry later. However, if you wrap your logic in a try/catch block and inadvertently swallow the exception—meaning you catch the error without rethrowing or handling it properly—you might unintentionally signal success. That would lead to the message being completed even though a downstream service might have failed.

Recommendation:

  • If you must use a try/catch, make sure errors are re-thrown or handled in a way that indicates failure if the message truly hasn’t been processed successfully. Otherwise, you’ll end up completing the message and losing valuable information about the error.
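
As an illustration, here is a minimal sketch of a Service Bus triggered function (written for the isolated worker model; the names and connection setting are assumptions) that logs and rethrows so the runtime can retry or dead-letter the message:

[Function("ProcessOrder")]
public async Task Run(
    [ServiceBusTrigger("orders", Connection = "ServiceBusConnection")]
    ServiceBusReceivedMessage message)
{
    try
    {
        await HandleOrderAsync(message); // your processing (illustrative)
    }
    catch (Exception ex)
    {
        // _logger injected via constructor (omitted for brevity)
        _logger.LogError(ex, "Processing failed for message {MessageId}", message.MessageId);
        throw; // rethrow so the message is not completed as successful
    }
}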

Typical Use Cases

  1. Financial Transactions: Losing a message that represents a monetary transaction is not an option. Peek-Lock ensures messages remain available until your code confirms it was successfully processed.
  2. Critical Notifications: If you have an alerting system that notifies users about important events, you don’t want those notifications disappearing in case of a crash.
  3. Order Processing: In ecommerce or supply chain scenarios, every order message has to be accounted for. Peek-Lock helps avoid partial or lost orders due to transient errors.

Example in C#

Here’s a short snippet that demonstrates how you can receive messages in Peek-Lock mode using the Azure.Messaging.ServiceBus library:

using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public class PeekLockExample
{
    private const string ConnectionString = "<YOUR_SERVICE_BUS_CONNECTION_STRING>";
    private const string QueueName = "<YOUR_QUEUE_NAME>";

    public async Task RunPeekLockSample()
    {
        // Create a Service Bus client
        var client = new ServiceBusClient(ConnectionString);

        // Create a receiver in Peek-Lock mode
        var receiver = client.CreateReceiver(
            QueueName, 
            new ServiceBusReceiverOptions 
            { 
                ReceiveMode = ServiceBusReceiveMode.PeekLock 
            }
        );

        try
        {
            // Attempt to receive a single message
            ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync(TimeSpan.FromSeconds(10));

            if (message != null)
            {
                // Process the message
                string body = message.Body.ToString();
                Console.WriteLine($"Processing message: {body}");

                // If processing is successful, complete the message
                await receiver.CompleteMessageAsync(message);
                Console.WriteLine("Message completed and removed from the queue.");
            }
            else
            {
                Console.WriteLine("No messages were available to receive.");
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine($"An error occurred: {ex.Message}");
            // Optionally handle or log the exception
        }
        finally
        {
            // Clean up resources
            await receiver.CloseAsync();
            await client.DisposeAsync();
        }
    }
}

What’s Happening Here?

  • We create a ServiceBusClient to connect to Azure Service Bus.
  • We specify ServiceBusReceiveMode.PeekLock when creating the receiver.
  • The code then attempts to receive one message and processes it.
  • If everything goes smoothly, we call CompleteMessageAsync to remove it from the queue. If something goes wrong, the message remains locked until the lock expires or until we choose to abandon it.

Final Thoughts

Peek-Lock strikes a balance between reliability and performance. It ensures you won’t lose critical data while giving you the flexibility to handle errors gracefully. Whether you’re dealing with financial operations, critical user notifications, or any scenario where each message must be processed correctly, Peek-Lock is an indispensable tool in your Azure Service Bus arsenal.

In Azure Functions, you get this benefit without having to manage the locking details, so long as you don’t accidentally swallow your exceptions. For other applications, adopting Peek-Lock might demand a bit more coding, but it’s well worth it if you need guaranteed, at-least-once message delivery.

Whether you’re building a simple queue-based workflow or a complex event-driven system, Peek-Lock ensures your messages remain safe until you decide they’re processed successfully. It’s a powerful approach that balances performance with reliability, which is why it’s a must-know feature for developers relying on Azure Service Bus.

Microsoft Azure Service Bus Exception: “Cannot allocate more handles. The maximum number of handles is 4999”

When working with Microsoft Azure Service Bus, you may encounter the following exception:

“Cannot allocate more handles. The maximum number of handles is 4999.”

This issue typically arises due to improper dependency injection scope configuration for the ServiceBusClient. In most cases, the ServiceBusClient is registered as Scoped instead of Singleton, leading to the creation of multiple instances during the application lifetime, which exhausts the available handles.

In this blog post, we’ll explore the root cause and demonstrate how to fix this issue by using proper dependency injection in .NET applications.

Understanding the Problem

Scoped vs. Singleton

  1. Scoped: A new instance of the service is created per request.
  2. Singleton: A single instance of the service is shared across the entire application lifetime.

The ServiceBusClient is designed to be a heavyweight object that maintains connections and manages resources efficiently. Hence, it should be registered as a Singleton to avoid excessive resource allocation and ensure optimal performance.

Before Fix: Using Scoped Registration

Here’s an example of the problematic configuration:

public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped(serviceProvider =>
    {
        string connectionString = Configuration.GetConnectionString("ServiceBus");
        return new ServiceBusClient(connectionString);
    });

    services.AddScoped<IMessageProcessor, MessageProcessor>();
}

In this configuration:

  • A new instance of ServiceBusClient is created for each HTTP request or scoped context.
  • This quickly leads to resource exhaustion, causing the “Cannot allocate more handles” error.

Solution: Switching to Singleton

To fix this, register the ServiceBusClient as a Singleton, ensuring a single instance is shared across the application lifetime:

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton(serviceProvider =>
    {
        string connectionString = Configuration.GetConnectionString("ServiceBus");
        return new ServiceBusClient(connectionString);
    });

    services.AddScoped<IMessageProcessor, MessageProcessor>();
}

In this configuration:

  • A single instance of ServiceBusClient is created and reused for all requests.
  • Resource usage is optimized, and the exception is avoided.
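
Alternatively, the Microsoft.Extensions.Azure helper package can own the registration; a minimal sketch (AddServiceBusClient registers the client as a singleton for you):

using Microsoft.Extensions.Azure;

public void ConfigureServices(IServiceCollection services)
{
    services.AddAzureClients(builder =>
    {
        // Registered as a singleton and wired to configuration in one place
        builder.AddServiceBusClient(Configuration.GetConnectionString("ServiceBus"));
    });

    services.AddScoped<IMessageProcessor, MessageProcessor>();
}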

Sample Code: Before and After

Before Fix (Scoped Registration)

public interface IMessageProcessor
{
    Task ProcessMessageAsync();
}

public class MessageProcessor : IMessageProcessor
{
    private readonly ServiceBusClient _client;

    public MessageProcessor(ServiceBusClient client)
    {
        _client = client;
    }

    public async Task ProcessMessageAsync()
    {
        ServiceBusReceiver receiver = _client.CreateReceiver("queue-name");
        var message = await receiver.ReceiveMessageAsync();
        Console.WriteLine($"Received message: {message.Body}");
        await receiver.CompleteMessageAsync(message);
    }
}

After Fix (Singleton Registration)

public void ConfigureServices(IServiceCollection services)
{
    // Singleton registration for ServiceBusClient
    services.AddSingleton(serviceProvider =>
    {
        string connectionString = Configuration.GetConnectionString("ServiceBus");
        return new ServiceBusClient(connectionString);
    });

    services.AddScoped<IMessageProcessor, MessageProcessor>();
}

public class MessageProcessor : IMessageProcessor
{
    private readonly ServiceBusClient _client;

    public MessageProcessor(ServiceBusClient client)
    {
        _client = client;
    }

    public async Task ProcessMessageAsync()
    {
        ServiceBusReceiver receiver = _client.CreateReceiver("queue-name");
        var message = await receiver.ReceiveMessageAsync();
        Console.WriteLine($"Received message: {message.Body}");
        await receiver.CompleteMessageAsync(message);
    }
}

Key Takeaways

  1. Always use Singleton scope for ServiceBusClient to optimize resource usage.
  2. Avoid using Scoped or Transient scope for long-lived, resource-heavy objects.
  3. Test your application under load to ensure no resource leakage occurs.

Resolving the “Certificate Chain Was Issued by an Authority That Is Not Trusted” Error During Sitecore Installation on Windows 11

When installing Sitecore on Windows 11, you might encounter the following error:

A connection was successfully established with the server, but then an error occurred during the login process. (provider: SSL Provider, error: 0 - The certificate chain was issued by an authority that is not trusted.)

This issue arises due to a recent security enforcement rolled out by Microsoft. Windows 11 now requires SQL Server connections to use encrypted connections by default. Some of the PowerShell scripts used during the Sitecore installation process are not configured to handle this change, resulting in the above error.

In this blog post, we’ll dive into the root cause of the issue and walk you through the steps to resolve it.


Understanding the Root Cause

The error is triggered because the PowerShell scripts used in the Sitecore installation attempt to connect to the SQL Server without explicitly trusting the server’s SSL certificate. With the new security enforcement, connections to the SQL Server default to encryption, but without a trusted certificate, the connection fails.

This is particularly relevant when using self-signed certificates or development environments where the SQL Server’s certificate authority is not inherently trusted.

How to Fix the Error

The solution is to explicitly configure the Sitecore installation scripts to trust the SQL Server’s certificate by setting the TrustServerCertificate variable to true. This needs to be done in two specific JSON files used during the installation process:

  1. sitecore-xp0.json
  2. xconnect-xp0.json

Steps to Resolve

  1. Locate the JSON Files:
    • Navigate to the folder where you extracted the Sitecore installation files.
    • Open the ConfigurationFiles directory (or equivalent, depending on your setup).
    • Find the sitecore-xp0.json and xconnect-xp0.json files.
  2. Modify the JSON Files:
    • Open sitecore-xp0.json in a text editor (e.g., Visual Studio Code or Notepad++).
    • Look for [variable('Sql.Credential')] in the JSON structure.
    • Add the following key-value pair: "TrustServerCertificate": true
    • Example:
"CreateShardApplicationDatabaseServerLoginInvokeSqlCmd": {
    "Description": "Create Collection Shard Database Server Login.",
    "Type": "InvokeSqlcmd",
    "Params": {
        "ServerInstance": "[parameter('SqlServer')]",
        "Credential": "[variable('Sql.Credential')]",
        "TrustServerCertificate": true,
        "InputFile": "[variable('Sharding.SqlCmd.Path.CreateShardApplicationDatabaseServerLogin')]",
        "Variable": [
            "[concat('UserName=',variable('SqlCollection.User'))]",
            "[concat('Password=',variable('SqlCollection.Password'))]"
        ]
    },
    "Skip": "[or(parameter('SkipDatabaseInstallation'),parameter('Update'))]"
},
"CreateShardManagerApplicationDatabaseUserInvokeSqlCmd": {
    "Description": "Create Collection Shard Manager Database User.",
    "Type": "InvokeSqlcmd",
    "Params": {
        "ServerInstance": "[parameter('SqlServer')]",
        "Credential": "[variable('Sql.Credential')]",
        "TrustServerCertificate": true,
        "Database": "[variable('Sql.Database.ShardMapManager')]",
        "InputFile": "[variable('Sharding.SqlCmd.Path.CreateShardManagerApplicationDatabaseUser')]",
        "Variable": [
            "[concat('UserName=',variable('SqlCollection.User'))]",
            "[concat('Password=',variable('SqlCollection.Password'))]"
        ]
    },
    "Skip": "[or(parameter('SkipDatabaseInstallation'),parameter('Update'))]"
}
    • Repeat the same modification for the xconnect-xp0.json file.
  3. Save and Retry Installation:
    • Save both JSON files after making the changes.
    • Re-run the Sitecore installation PowerShell script.

Additional Notes

  • Security Considerations: Setting TrustServerCertificate to true is a quick fix for development environments. However, for production environments, it’s recommended to install a certificate from a trusted Certificate Authority (CA) on the SQL Server to ensure secure and trusted communication.
  • Error Still Persists?: Double-check the JSON modifications and ensure the SQL Server is accessible from your machine. If issues persist, verify firewall settings and SQL Server configuration.

Conclusion

The “Certificate chain was issued by an authority that is not trusted” error during Sitecore installation is a direct result of Microsoft’s enhanced security measures in Windows 11. By updating the Sitecore configuration files to include the TrustServerCertificate setting, you can bypass this error and complete the installation successfully.

For a smoother experience in production environments, consider implementing a properly signed SSL certificate for your SQL Server.

If you’ve encountered similar issues or have additional tips, feel free to share them in the comments below!

Sending Apple Push Notification for Live Activities Using .NET

In the evolving world of app development, ensuring real-time engagement with users is crucial. Apple Push Notification Service (APNs) enables developers to send notifications to iOS devices, and with the introduction of Live Activities in iOS, keeping users updated about ongoing tasks is easier than ever. This guide demonstrates how to use .NET to send Live Activity push notifications through APNs.

Prerequisites

Before diving into the code, ensure you have the following:

  1. Apple Developer Account with access to APNs.
  2. P8 Certificate downloaded from the Apple Developer Portal.
  3. Your Team ID, Key ID, and Bundle ID of the iOS application.
  4. .NET SDK installed on your system.

Overview of the Code

The provided ApnsService class encapsulates the logic to interact with APNs for sending push notifications, including Live Activities. Let’s break it down step by step:

1. Initializing APNs Service

The constructor sets up the base URI for APNs:

  • Use https://api.push.apple.com for production.
  • Use https://api.development.push.apple.com for the development environment.

_httpClient = new HttpClient { BaseAddress = new Uri("https://api.development.push.apple.com:443") };

2. Generating the JWT Token

APNs requires a JWT token for authentication. This token is generated using:

  • Team ID: Unique identifier for your Apple Developer account.
  • Key ID: Associated with the P8 certificate.
  • ES256 Algorithm: Uses the private key in the P8 certificate to sign the token.

private string GetProviderToken()
{
    double epochNow = (int)DateTime.UtcNow.Subtract(new DateTime(1970, 1, 1)).TotalSeconds;
    Dictionary<string, object> payload = new Dictionary<string, object>
    {
        { "iss", _teamId },
        { "iat", epochNow }
    };
    var extraHeaders = new Dictionary<string, object>
    {
        { "kid", _keyId },
        { "alg", "ES256" }
    };

    CngKey privateKey = GetPrivateKey();

    return JWT.Encode(payload, privateKey, JwsAlgorithm.ES256, extraHeaders);
}

3. Loading the Private Key

The private key is extracted from the .p8 file using BouncyCastle. Note that the public point has to be derived from the private scalar (Q = d · G); reading the coordinates of the curve generator G directly would produce an invalid key pair.

private CngKey GetPrivateKey()
{
    using (var reader = File.OpenText(_p8CertificateFileLocation))
    {
        ECPrivateKeyParameters ecPrivateKeyParameters = (ECPrivateKeyParameters)new PemReader(reader).ReadObject();

        // Derive the public point Q = d * G from the private scalar
        var q = ecPrivateKeyParameters.Parameters.G.Multiply(ecPrivateKeyParameters.D).Normalize();
        var x = q.AffineXCoord.GetEncoded();
        var y = q.AffineYCoord.GetEncoded();
        var d = ecPrivateKeyParameters.D.ToByteArrayUnsigned();

        return EccKey.New(x, y, d);
    }
}

4. Sending the Notification

The SendApnsNotificationAsync method handles:

  • Building the request with headers and payload.
  • Adding apns-push-type as liveactivity for Live Activity notifications.
  • Adding a unique topic for Live Activities by appending .push-type.liveactivity to the Bundle ID.

public async Task SendApnsNotificationAsync<T>(string deviceToken, string pushType, T payload) where T : class
{
    var jwtToken = GetProviderToken();
    var jsonPayload = JsonSerializer.Serialize(payload);

    // Prepare the HTTP request
    var request = new HttpRequestMessage(HttpMethod.Post, $"/3/device/{deviceToken}")
    {
        Content = new StringContent(jsonPayload, Encoding.UTF8, "application/json")
    };
    request.Headers.Add("authorization", $"Bearer {jwtToken}");
    request.Headers.Add("apns-push-type", pushType);

    if (pushType == "liveactivity")
    {
        request.Headers.Add("apns-topic", _bundleId + ".push-type.liveactivity");
        request.Headers.Add("apns-priority", "10");
    }
    else
    {
        request.Headers.Add("apns-topic", _bundleId);
    }

    // APNs requires HTTP/2
    request.Version = new Version(2, 0);

    // Send the request
    var response = await _httpClient.SendAsync(request);
    if (response.IsSuccessStatusCode)
    {
        Console.WriteLine("Push notification sent successfully!");
    }
    else
    {
        var responseBody = await response.Content.ReadAsStringAsync();
        Console.WriteLine($"Failed to send push notification: {response.StatusCode} - {responseBody}");
    }
}

Sample Usage

Here’s how you can use the ApnsService class to send a Live Activity notification:

var apnsService = new ApnsService();

// Example device token (replace with a real one)
var pushDeviceToken = "808f63xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";

// Create the payload for the Live Activity
var notificationPayload = new PushNotification
{
    Aps = new Aps
    {
        Timestamp = DateTimeOffset.UtcNow.ToUnixTimeSeconds(),
        Event = "update",
        ContentState = new ContentState
        {
            Status = "Charging",
            ChargeAmount = "65 kW",
            DollarAmount = "$11.80",
            TimeDuration = "00:28",
            Percentage = 80
        }
    }
};

await apnsService.SendApnsNotificationAsync(pushDeviceToken, "liveactivity", notificationPayload);
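
The sample above assumes payload classes along these lines; a minimal sketch, assuming the JSON keys follow Apple’s Live Activity payload layout (“aps”, “timestamp”, “event”, “content-state”) and that the content-state keys mirror the app’s Swift ContentState property names (assumed here):

using System.Text.Json.Serialization;

public class PushNotification
{
    [JsonPropertyName("aps")]
    public Aps Aps { get; set; }
}

public class Aps
{
    [JsonPropertyName("timestamp")]
    public long Timestamp { get; set; }

    [JsonPropertyName("event")]
    public string Event { get; set; } // "update" or "end"

    [JsonPropertyName("content-state")]
    public ContentState ContentState { get; set; }
}

// Key names must match the Swift ActivityAttributes.ContentState (assumptions below).
public class ContentState
{
    [JsonPropertyName("status")]
    public string Status { get; set; }

    [JsonPropertyName("chargeAmount")]
    public string ChargeAmount { get; set; }

    [JsonPropertyName("dollarAmount")]
    public string DollarAmount { get; set; }

    [JsonPropertyName("timeDuration")]
    public string TimeDuration { get; set; }

    [JsonPropertyName("percentage")]
    public int Percentage { get; set; }
}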

Key Points to Remember

  1. JWT Token Validity: Tokens expire after 1 hour. Regenerate them periodically and reuse the current token between sends (see the caching sketch below).
  2. APNs Endpoint: Use the correct environment (production or development) based on your app stage.
  3. Error Handling: Handle HTTP responses carefully. Common issues include invalid tokens or expired certificates.
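
A minimal caching sketch, assuming Apple’s guidance of refreshing the provider token at least every 60 minutes but no more often than every 20:

private string _cachedToken;
private DateTime _tokenIssuedUtc;

private string GetCachedProviderToken()
{
    // Reuse the token for ~50 minutes, then mint a fresh one.
    if (_cachedToken == null || DateTime.UtcNow - _tokenIssuedUtc > TimeSpan.FromMinutes(50))
    {
        _cachedToken = GetProviderToken();
        _tokenIssuedUtc = DateTime.UtcNow;
    }
    return _cachedToken;
}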

Debugging Tips

  • Ensure your device token is correct and valid.
  • Double-check your .p8 file, Team ID, Key ID, and Bundle ID.
  • Use tools like Postman to test your APNs requests independently.

Conclusion

Sending Live Activity push notifications using .NET involves integrating APNs with proper authentication and payload setup. The ApnsService class demonstrated here provides a robust starting point for developers looking to enhance user engagement with real-time updates. 🚀
