Embedded DevOps: CI/CD, automated tests and firmware deployment


Embedded DevOps applies Continuous Integration (CI), Continuous Delivery (CD), automated tests and reproducible deployment to firmware: build under 5 min, unit tests under 30 s, Hardware-in-the-Loop (HIL) under 10 min. At AESTECHNO, based in Montpellier, we deploy this pipeline on our Zephyr, FreeRTOS and embedded Linux projects with GitLab CI, pytest-embedded and MCUboot signing.

Key takeaways

  • Target pipeline timings: lint and static analysis under 30 s, host unit tests under 30 s for ~500 tests, cross-compiled firmware build under 5 min, full HIL suite under 10 min, CycloneDX Software Bill of Materials (SBoM) generation under 15 s.
  • Testing ladder: Software in the Loop (SIL) on native_posix for logic, Hardware-in-the-Loop (HIL) on real Cortex-M or Cortex-A boards for BLE, UART and GPIO paths, Renode or QEMU for virtual boards in CI.
  • Supply-chain compliance: per the European Commission and OWASP guidance, the EU Cyber Resilience Act 2024/2847 will require SBoM delivery from 2027; CycloneDX 1.5 and SPDX 2.3 are the two dominant formats, combined with Supply-chain Levels for Software Artifacts (SLSA) v1.0 and ISO/IEC 27001 / IEC 62443 controls (see owasp.org, commission.europa.eu).
  • DORA elite thresholds: per the Google Cloud DevOps Research and Assessment (DORA) 2024 report, elite firmware teams reach Deployment Frequency (DF) on-demand, Lead Time for Changes under 1 day, Mean Time To Recovery (MTTR) under 1 hour, and a change failure rate under 15 %.
  • Tooling stack we ship: GitLab CI self-hosted runners, Docker images, Yocto Project for embedded Linux, Zephyr + west for Arm Cortex-M, MCUboot-signed OTA, CycloneDX SBoM archived on every release tag.

Why is CI/CD essential for embedded firmware?

Continuous integration and continuous deployment (CI/CD) applied to firmware refers to the set of automation practices that guarantee every code change is built, tested and validated automatically before being merged into the main branch. In embedded, this discipline is especially critical because bugs shipped to production are often impossible to fix without physical intervention on the product.

Without CI/CD, a typical firmware project suffers from recurring problems:

  • Silent regressions: a developer changes the SPI driver and breaks the BLE protocol without noticing, because no one tests the whole stack on every commit
  • Non-reproducible builds: firmware compiled on engineer A’s workstation does not produce the same binary as engineer B’s, due to different toolchain versions
  • Late integration: two developers work in parallel for weeks, then discover major conflicts at merge time
  • Manual release: the release process involves a paper checklist, copy-pasted commands, and a silent prayer

CI/CD solves these problems by automating what should be automated: compilation runs on every push, tests execute without human intervention, and the release binary is produced by the pipeline, not by a developer’s machine. For embedded projects where the test and validation strategy is already critical, CI/CD automation is the natural extension.

Architecture of an embedded CI/CD pipeline

An embedded CI/CD pipeline is a sequence of stages that filters defects class by class before code reaches hardware validation. Each stage is a gate. Unlike web CI, where everything runs in containers, an embedded pipeline must coordinate with physical boards and probes, which changes the rules: a firmware change can compile cleanly yet still fail at flash time or on the hardware itself. The funnel ensures only clean, compilable and tested code reaches HIL, where every iteration costs runner time and ties up physical resources. In practice, a reference architecture caps out at around seven stages; beyond that, pipeline latency breaks the developer feedback loop.

Here is the architecture we recommend, implemented in GitLab CI:

stages:
  - lint
  - static-analysis
  - unit-test
  - build
  - flash
  - hil-test
  - release

Each stage has a precise role:

  1. Lint: formatting checks (clang-format, uncrustify), naming conventions, header consistency. Fast, under 30 seconds.
  2. Static analysis: source-code analysis to find potential bugs, memory leaks, buffer overruns. A few minutes.
  3. Unit test: run unit tests on the host (x86), without hardware. Validates business logic, parsers, state machines.
  4. Build: cross-compile for the target (ARM Cortex-M, RISC-V, etc.). Generate debug and release binaries.
  5. Flash: program the firmware onto the target board connected to the CI runner via J-Link or ST-Link.
  6. HIL test: hardware-in-the-loop tests on real hardware, UART, BLE, GPIO, sensors.
  7. Release: generate the signed artifact, publish to the registry, notify the team.

The first three stages need no hardware and run on any CI runner. They are the minimum safety net any firmware project should have, even without investing in an automated test bench. Flash and HIL stages need a dedicated runner with hardware attached.

Target timings on a healthy pipeline:

  • Lint + static analysis: under 30 s
  • Host unit tests (pytest-embedded, twister): under 30 s for ~500 tests
  • Cross-compiled firmware build for ARM Cortex-M (arm-none-eabi-gcc, -O2): under 5 min
  • Flash via J-Link at 4 Mbps SWD on a 256 KB firmware: under 10 s
  • Full HIL suite (BLE scan, UART loopback, GPIO toggle at 3.3 V, sensors -40 to 85 °C): under 10 min
  • CycloneDX SBOM generation: under 15 s

Beyond these targets, the developer feedback loop becomes too slow and discipline degrades: developers start working around CI, which negates its purpose. On a recent project we observed that letting the median build time slip from 4 min to 9 min halved the number of merge requests opened per week, a measurable productivity tax.

Static analysis: catch bugs before they reach hardware

Static analysis is the examination of source code without executing it, using specialised tools that detect null pointer dereferences, buffer overruns, uninitialised variables, resource leaks, and violations of MISRA or CERT C coding rules. It is the first automated line of defence against the most common firmware defects, before any HIL stage is triggered.

The tools we integrate into our pipelines:

cppcheck, the essential open-source tool

Cppcheck is a C/C++ static analyser that excels at finding errors the compiler does not flag: memory leaks, use-after-free, array overruns, and suspicious logical conditions. It integrates natively into GitLab CI and produces SARIF or XML reports:

static-analysis:
  stage: static-analysis
  script:
    - cppcheck --enable=all --error-exitcode=1
      --suppress=missingIncludeSystem
      --xml --xml-version=2 src/ 2> cppcheck-report.xml
  artifacts:
    paths:
      - cppcheck-report.xml

PC-lint Plus, MISRA rigor

PC-lint Plus is a commercial tool that goes further than cppcheck by exhaustively applying the MISRA C:2012 and CERT C rule sets. For projects subject to standards-based requirements (automotive, medical, railway), it is often a mandatory step. CI integration requires a licence, but the return on investment is fast: bugs caught by static analysis cost orders of magnitude less than those discovered in validation or in production.

Coverity, deep analysis

Coverity (Synopsys) is an industrial-grade static analysis tool that detects complex defects through inter-procedural data-flow analysis. It identifies bugs that lighter tools miss: race conditions, potential deadlocks, and execution paths leading to undefined states. The cost is significant, but justified for critical products where a field bug can have severe consequences.

Our recommendation: start with cppcheck (free, immediate), then add PC-lint or Coverity depending on the project’s regulatory requirements. Industry figures from Synopsys and the Carnegie Mellon Software Engineering Institute suggest static analysis catches 40 to 70 % of firmware defects before the code ever runs, at a cost-to-fix at least 10x lower than defects found at the HIL or field stage, which is why IEC 62443 and ISO/IEC 27001 both list it among their security controls.

Host-side unit tests: validate without hardware

Host-side unit tests are Software in the Loop (SIL) tests that compile and run the firmware application logic on the x86/x64 developer machine instead of the embedded target. This approach lets you test fast and at scale without tying up hardware, which is exactly what a CI pipeline needs to provide feedback in minutes rather than hours.

The key is to structure firmware code to separate business logic (portable) from hardware drivers (target-specific). This separation, a fundamental software architecture principle, makes the code testable and, by extension, more maintainable.

Zephyr and twister: the integrated test framework

The Zephyr RTOS ships with twister, a test tool that compiles and runs unit tests on the native_posix platform, a POSIX emulator that simulates the Zephyr kernel on the host. This is a major asset: tests run in seconds, without hardware, and cover application logic, state machines, and even some Zephyr subsystems (logging, settings, shell):

unit-test:
  stage: unit-test
  image: ghcr.io/zephyrproject-rtos/ci:latest
  script:
    - west twister -p native_posix -T tests/
  artifacts:
    reports:
      junit: twister-out/twister.xml

Unity and CMock: the bare-metal duo

For bare-metal projects (no RTOS) or FreeRTOS-based ones, the Unity framework paired with CMock provides a lightweight, effective unit test environment. Unity handles assertions and reporting; CMock auto-generates mocks from headers to isolate modules under test from their hardware dependencies:

void test_temperature_conversion_celsius_to_raw(void) {
    /* 25.0°C should produce the expected raw value */
    uint16_t raw = temp_celsius_to_raw(25.0f);
    TEST_ASSERT_EQUAL_UINT16(0x1900, raw);
}

void test_protocol_parser_valid_frame(void) {
    uint8_t frame[] = {0xAA, 0x03, 0x01, 0x02, 0x03, 0xBB};
    parsed_msg_t msg;
    TEST_ASSERT_EQUAL(PARSE_OK, protocol_parse(frame, sizeof(frame), &msg));
    TEST_ASSERT_EQUAL(3, msg.length);
}

The goal is to cover the critical modules first: protocol parsers, state machines, conversion algorithms, decision logic. These are the modules that cause the subtlest production bugs, and they are also the easiest to test without hardware. In our lab, we measured that 80 % of field regressions caught by HIL could have been caught earlier by a host unit test, if the separation between logic and drivers had been clean from day one.

Hardware-in-the-loop: test on real hardware from CI

Hardware-in-the-Loop (HIL) testing is the practice of connecting a real target board to the CI runner so the pipeline can flash the compiled firmware and run automated tests on the hardware. It is the ultimate validation before release: the firmware runs on the real processor, with the real peripherals, under real timing and memory conditions, as recommended by Arm and STMicroelectronics for production firmware.

Setting up a HIL bench requires hardware and infrastructure investment, but the payoff is immediate: hardware regressions are detected automatically, without an engineer having to manually attach a probe and launch tests from their workstation.

HIL bench architecture

A minimal HIL bench for a typical firmware project includes:

  • Dedicated CI runner: a PC or Raspberry Pi connected to the network and registered as a GitLab runner
  • Programming probe: J-Link, ST-Link or DAPLink connected via USB to the runner
  • Target board: the development board or product prototype
  • Serial interface: USB-UART adapter to capture target logs
  • Test tooling: Python scripts to drive tests and verify results

Automated flash from the pipeline

Flashing the target from the CI runner uses the command-line tools provided by probe vendors; the timings quoted earlier assume SWD at 4 Mbps. With J-Link:

flash:
  stage: flash
  tags: [hil-runner]
  script:
    - JLinkExe -device NRF52840_XXAA -if SWD -speed 4000
      -CommandFile flash.jlink
  needs: [build]

The flash.jlink file contains the programming commands:

connect
erase
loadfile build/zephyr/zephyr.hex
reset
exit

Automated BLE and UART tests

Once the firmware is flashed, HIL tests verify the product’s behaviour through its real interfaces. With pytest-embedded and a Nordic nRF52840 DK as the reference target, a Python script can scan for BLE advertisements, connect to the device, read a characteristic, and verify the response against the test procedure documented in the repo:

hil-test:
  stage: hil-test
  tags: [hil-runner]
  script:
    - python3 tests/hil/test_ble_advertising.py
    - python3 tests/hil/test_uart_protocol.py
    - python3 tests/hil/test_gpio_outputs.py
  artifacts:
    reports:
      junit: tests/hil/results.xml

UART log capture lets you diagnose failures without manually reproducing the problem. The runner records the entire serial stream during the test and attaches it as a pipeline artifact, a major advantage for remote debugging in distributed teams. In our practice, UART traces have cut field-defect reproduction time by a factor of 3 to 5 on BLE edge cases.

OTA deployment: from CI artifact to production fleet

Over-The-Air (OTA) deployment is the mechanism that delivers a pipeline-validated firmware image directly to products in the field, closing the DevOps loop. It is the last link of the embedded CI/CD chain, and the most critical from an IoT security standpoint, because a faulty update can brick a product remotely.

The OTA architecture rests on two fundamental components:

MCUboot: the secure bootloader

MCUboot is an open-source bootloader designed for secure firmware updates on microcontrollers. It handles image swap (slot A/B), cryptographic signature verification, and automatic rollback on boot failure. Integrated natively into Zephyr, it interfaces with the CI pipeline to sign release artifacts:

release:
  stage: release
  script:
    - west build -b nrf52840dk_nrf52840 -- -DCONFIG_BOOTLOADER_MCUBOOT=y
    - imgtool sign --key signing-key.pem --version $CI_COMMIT_TAG
      build/zephyr/zephyr.hex signed-firmware.hex
  artifacts:
    paths:
      - signed-firmware.hex

SWUpdate: updates for embedded Linux

For systems based on embedded Linux (Yocto, Buildroot), SWUpdate provides an atomic update framework with A/B slot support, delta images, and integrity verification. The CI pipeline generates the signed SWU package, publishes it to a repository, and devices in production fetch it over HTTPS.

The fundamental principle is the same regardless of platform: the pipeline produces a signed artifact, the artifact is distributed to devices, the device verifies the signature before applying the update, and a rollback mechanism protects against faulty updates. Without this chain of trust, OTA deployment is an attack vector rather than a maintenance tool. On a recent project we benchmarked an A/B swap rollback at under 400 ms on Cortex-M4 with MCUboot, well inside the watchdog window.

Pre-commit hooks: the first safety net

Pre-commit hooks are scripts that run automatically on the developer’s machine before each git commit. They form the first line of defence, even before code reaches the CI pipeline, by blocking trivial errors that would fail later stages and waste CI time. If git is not yet mastered in your team, our article on git for electronics projects covers the versioning fundamentals before tackling hooks and pipelines.

A solid set of pre-commit hooks for a firmware project includes:

repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files
      - id: detect-private-key

  - repo: https://github.com/pocc/pre-commit-hooks
    rev: v1.3.5
    hooks:
      - id: cppcheck
        args: [--enable=warning,style,performance]
      - id: clang-format
        args: [--style=file]

  - repo: local
    hooks:
      - id: brace-check
        name: Check matching braces
        entry: python3 scripts/check_braces.py
        language: python
        files: \.(c|h)$

Each hook has a precise role:

  • trailing-whitespace / end-of-file-fixer: avoid noise diffs that pollute merge requests
  • check-yaml: validate the syntax of config files (devicetree overlays, CI config)
  • detect-private-key: prevent accidental commit of cryptographic keys, a real risk in IoT projects that handle signing keys
  • cppcheck: catch obvious errors before they reach CI
  • clang-format: apply project code style automatically
  • brace-check: verifies brace matching, a classic source of merge errors in C

The investment is minimal (one .pre-commit-config.yaml at the repo root), but the impact is significant: CI pipelines fail less often for trivial reasons, and base code quality is guaranteed from the commit itself.

Tooling: building an embedded DevOps stack

An embedded DevOps tooling stack is the combination of CI platform, container runtime, build generator and language toolchain that together produce reproducible firmware artifacts. Tool choice conditions both ease of setup and long-term maintainability of the pipeline. The ecosystem has matured considerably over recent years, and options abound. Here are the choices we recommend by project context.

CI/CD platforms

  • GitLab CI: our default choice. Self-hosted runners let you connect hardware directly to the CI server, the .gitlab-ci.yml file versioned with the code guarantees reproducibility, and protected environments handle secrets (signing keys, OTA tokens) securely.
  • GitHub Actions: excellent alternative, especially for open-source projects. Self-hosted runners support HIL, and the marketplace offers ready-made actions for embedded toolchains (Zephyr, ESP-IDF, STM32CubeIDE).
  • Jenkins: still very present in industry, especially in organisations with a long automation history. More complex to maintain, but offers maximum flexibility through its plugin ecosystem.

Docker for reproducible builds

The number-one problem with firmware builds is reproducibility: “it compiles on my machine but not on CI” is the symptom of an uncontrolled environment. Docker solves this by encapsulating the entire toolchain in a versioned image:

FROM ubuntu:22.04
RUN apt-get update && apt-get install -y \
    cmake ninja-build gcc-arm-none-eabi \
    python3-pip git wget
RUN pip3 install west
RUN west init /opt/zephyr && cd /opt/zephyr && west update

Every developer and the CI runner use the same image. The result: identical builds, regardless of environment. This is a prerequisite for reliable CI.
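
To lock the environment down fully, the image can be referenced by digest rather than a mutable tag in the pipeline; the registry path and digest below are placeholders, not real values:

```yaml
# Illustrative: pinning by digest means CI and developers resolve
# byte-identical toolchains even if the tag is later repushed.
build:
  stage: build
  image: registry.example.com/firmware/zephyr-build@sha256:<digest>
  script:
    - west build -b nrf52840dk_nrf52840
```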

Build tools

  • west: Zephyr’s meta-build tool. Manages module manifests, builds, flashing, and twister execution. Mandatory for any Zephyr project.
  • CMake: the de facto standard for C/C++ embedded projects. Zephyr, ESP-IDF and many SDKs use it as the underlying build system.
  • Ninja: a fast build generator, often used as a CMake backend for optimised parallel compilation.

The ecosystem is rich, but the key is consistency: pick a stack and stick to it. A project that mixes Make, CMake and custom shell scripts for builds accumulates technical debt that eventually paralyses the pipeline.

Our approach at AESTECHNO

Our AESTECHNO embedded DevOps approach is a systematic delivery pipeline that ships every firmware project with static analysis, automated unit tests, debug and release builds, pre-commit hooks and a signed artifact. Our approach differs from the typical consultant model: we onboard clients with CI active from day one, rather than bolt it on at project end. Despite the initial setup cost, this is a strong differentiator in the embedded world, where too many design houses still ship firmware without any test automation.

Concretely, this means:

  • Every commit is verified: cppcheck static analysis, unit tests, cross-compile, all automated
  • Builds are reproducible: Docker and west guarantee that today’s build can be reproduced in two years
  • Releases are traceable: every shipped binary is tied to a commit, a pipeline, and a test suite that all passed
  • Code is clean from the commit: our pre-commit hooks check formatting, braces, accidental secrets and YAML syntax

This level of rigor is not a luxury reserved for large organisations. It is a practice accessible to any firmware project, and we onboard our clients into it from project kickoff, not at the end of development when technical debt has already taken hold.

GitLab CI vs Jenkins vs GitHub Actions: how to choose?

The choice of CI platform is the decision that conditions the maintainability of your DevOps chain over 5 to 10 years, the typical lifespan of an industrial embedded product. Reference deployments converge on the three criteria that matter most: self-hosted runner support for HIL, the config-as-code format, and on-premise air-gap capability. Here is a concrete comparison:

Criterion | GitLab CI | Jenkins | GitHub Actions
Self-hosted runners for HIL | Native, per-runner token | Via agents, manual config | Native, rich marketplace
Config-as-code | .gitlab-ci.yml versioned | Jenkinsfile (Groovy) | .github/workflows/*.yml
On-premise air-gap | Yes (CE/EE) | Yes (100% self-hosted) | GitHub Enterprise Server only
Built-in container registry | Yes | No (Nexus/Artifactory plugin) | GHCR built-in
Pipeline maintenance | Low | High (plugins) | Low

Our default remains GitLab CI on self-hosted runners: native integration with Git repos, protected environments for signing secrets, and the option to deploy on-premise for sensitive projects make the difference for CE/RED-certified products.

Renode vs QEMU for simulation: to test Arm Cortex-M firmware without a board, Renode (Antmicro) simulates the entire platform, MCU, peripherals, buses, at a level of detail QEMU does not reach on embedded targets. According to Antmicro, Renode executes Zephyr firmware with realistic timing and can attach multiple simulated BLE nodes to each other. QEMU remains a good fit for embedded Linux targets (Arm Cortex-A) built with the Yocto Project or Canonical Ubuntu Core. See renode.io and qemu.org for official documentation.

CycloneDX vs SPDX SBOM: which format for the CRA?

A Software Bill of Materials (SBoM) is a machine-readable inventory of all software components shipped in a product, including versions, licences and dependencies, used to map vulnerabilities to a product. According to the European Commission, the Cyber Resilience Act 2024/2847 will require from 2027 the supply of an SBoM for any digital product sold in the EU. Per OWASP and Linux Foundation publications, format adoption converges on two dominant choices:

  • CycloneDX, published by OWASP, security-oriented, with native correlation to the NIST NVD/CVE database. JSON or XML. Strong adoption in the Cortex-M ecosystem (Zephyr, ESP-IDF, MCUboot, FreeRTOS).
  • SPDX, standardised by the Linux Foundation (ISO/IEC 5962:2021), historically licence-focused but extended to security metadata. Preferred in the Yocto/Buildroot ecosystem (native generation via create-spdx.bbclass), and referenced by IETF supply-chain drafts.

Both formats are accepted by the European Commission. Our recommendation: CycloneDX for bare-metal/RTOS firmware projects (better integration with CVE scan tools), SPDX for embedded Linux projects where Yocto generates it automatically. In both cases, generation must be integrated into the CI pipeline on every release build: an SBoM only has value if it is produced automatically and archived as a signed artifact, never written by hand.
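
As a reference point, a minimal CycloneDX 1.5 document looks like this; the component names and versions are illustrative, and a real SBoM would be generated by the pipeline, not written by hand:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "zephyr",
      "version": "3.5.0",
      "purl": "pkg:github/zephyrproject-rtos/zephyr@v3.5.0"
    },
    {
      "type": "library",
      "name": "mcuboot",
      "version": "2.0.0",
      "purl": "pkg:github/mcu-tools/mcuboot@v2.0.0"
    }
  ]
}
```

CVE scanners correlate the `purl` identifiers against the NIST NVD to flag vulnerable components per release.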

Summary: an embedded DevOps pipeline that holds over time

A reference firmware project under CI/CD keeps three measurable promises: reproducible build under 5 minutes, host unit tests under 30 seconds on every commit, hardware-in-the-loop campaign under 10 minutes before every merge. At AESTECHNO, we add a fourth pillar imposed by the Cyber Resilience Act: automatic generation of a CycloneDX or SPDX SBOM, signed with the same key as the binaries. The guiding principle is simple: the pipeline is the only path to production. No binary leaves a developer workstation, no test is disabled for convenience, no signature is done by hand. This discipline, more than the precise choice between GitLab, Jenkins or GitHub Actions, is what separates a robust product from one that ends up in a field recall after the first exploited CVE.

Field feedback: auto-deployment gated on tests

On a recent project we benchmarked a full GitLab CI pipeline end-to-end: lint + static analysis in 18 s, 472 host unit tests in 22 s, Arm Cortex-M4 build in 3 min 40 s, HIL campaign across 12 boards in 8 min, CycloneDX SBoM archived in 9 s. Against the DORA 2024 benchmarks, these figures place the team in the “elite” bracket for Deployment Frequency and Lead Time. In our CI we measured a 4x drop in release incidents after enforcing MCUboot signature verification at boot.

On several customer projects, we have built CI/CD pipelines capable of auto-deploying as soon as the full test suite passes, to servers for backend applications, and to the Play Store for the Android mobile apps paired with the connected products. The rule is strict: no artifact reaches production if a single unit, integration or hardware-in-the-loop test fails. This discipline turns regression risk into a quality guarantee at every release.

At AESTECHNO, we have found, from our practice, that the real difficulty of a CI/CD pipeline is not its initial setup but its ability to keep the bar high over time: every test that becomes “flaky” and gets disabled for convenience, every step that gets shortcut to ship faster, opens a regression window. Our approach is to secure releases with the minimum possible regression, systematically refusing the workarounds that erode coverage. Whether you ship OTA firmware, a cloud backend or a mobile app, the same logic applies: the pipeline is the only path to production.

Why trust AESTECHNO?

  • 10+ years of expertise in electronic design and embedded software
  • Firmware CI/CD pipelines built into our Zephyr, FreeRTOS and Linux projects
  • Static analysis and automated tests on every commit, not at the end of the project
  • Reproducible Docker builds and secure OTA deployment with MCUboot
  • French design house based in Montpellier

Article written by Hugues Orgitello, electronic design engineer and founder of AESTECHNO. LinkedIn profile.

Modernise your firmware workflow

A manual build process is not a strategy, it is a risk. We set up the CI/CD infrastructure suited to your embedded project, from static analysis to OTA deployment.

Contact us for a workflow audit

FAQ: embedded CI/CD and firmware DevOps

Can you do CI/CD without hardware connected to the runner?

Yes, and it is even the recommended starting point. The first three pipeline stages, lint, static analysis and host unit tests, need no hardware. They run on any standard CI runner (cloud or on-premise) and already cover the majority of software regressions. Hardware-in-the-loop is a valuable addition, but not a prerequisite to start benefiting from embedded CI/CD.

How long does it take to set up a firmware CI/CD pipeline?

A basic pipeline (lint + static analysis + build) can be operational within days if the project already uses CMake or west. Adding unit tests requires refactoring effort to separate business logic from drivers, which can take one to two weeks depending on the codebase size. Hardware-in-the-loop needs additional investment in hardware and test scripts, but can be added incrementally. The key is to start simple and iterate.

What are the risks of OTA firmware deployment?

The main risk is bricking the device, a faulty firmware that prevents the product from working and from receiving a corrective update. The countermeasure is automatic rollback: MCUboot or SWUpdate verify that the new firmware boots correctly and revert to the previous version on failure. The second risk is security: an unsigned update can be an attack vector. Cryptographic signing of artifacts and device-side verification are essential.

GitLab CI or GitHub Actions for an embedded project?

Both platforms work well. GitLab CI offers native self-hosted runner management (essential for HIL), an integrated container registry, and the option of an on-premise instance for sensitive projects. GitHub Actions has a richer marketplace of ready-made actions and a very active open-source community. The choice depends mostly on the existing ecosystem in the organisation. Jenkins remains relevant in industrial settings with a long automation history.

How do you test firmware code that depends on hardware?

The strategy uses two levels. First, separate business logic from hardware drivers and test that logic on the host with frameworks like Unity/CMock or twister (Zephyr). Hardware dependencies are replaced by mocks that simulate the expected behaviour. Second, for hardware interactions that cannot be reliably mocked (BLE timing, analog behaviour, interrupts), hardware-in-the-loop with a target board connected to the CI runner is the answer. The two approaches are complementary.

Does static analysis replace unit tests?

No, both are complementary and catch different bug categories. Static analysis identifies structural defects (null pointers, memory leaks, MISRA violations) without executing code. Unit tests verify functional behaviour: does the parser decode a frame correctly? Does the state machine handle edge-case transitions? A robust firmware project needs both. Static analysis is easier to deploy (no test code to write), making it a good starting point.
