Electron Vite Project: Setup, Core Concepts, and Build Process

This guide walks through the process of setting up a new Electron project using the official Vite template, explains the core architectural difference from web development (process separation and IPC), details the subsequent build (make) command and its output, and covers release automation with GitHub Actions.

  1. Creating the Project with Vite Template

    You initiated the project using create-electron-app with the Vite template:

    Terminal window
    npx create-electron-app@latest my-app --template=vite
    • npx create-electron-app@latest: Executes the latest version of the official Electron scaffolding tool.
    • my-app: The name (and directory) for your new project.
    • --template=vite: This crucial flag instructs the tool to set up:
      • Vite Integration: Configures Vite for fast development (HMR) and optimized production builds.
      • Specific File Structure: Creates src/main.js, src/preload.js, src/renderer.js, index.html, etc.
      • Vite Configuration Files: Generates vite.main.config.mjs, vite.preload.config.mjs, vite.renderer.config.mjs.
      • Electron Forge Configuration: Sets up forge.config.js including the @electron-forge/plugin-vite to bridge Electron Forge and Vite.
      • package.json: Defines dependencies (Electron, Vite, Forge plugins) and scripts (start, package, make).
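
    For reference, the generated forge.config.js wires Electron Forge to Vite roughly as sketched below. Exact contents vary between template versions, so treat this as an illustration rather than the canonical file: the build entries cover the main and preload bundles, and the renderer entry covers the browser-window code.

    // forge.config.js (sketch of the Vite plugin section; details vary by template version)
    module.exports = {
      // ... packagerConfig, makers, etc. ...
      plugins: [
        {
          name: '@electron-forge/plugin-vite',
          config: {
            // Bundles built with the main/preload Vite configs
            build: [
              { entry: 'src/main.js', config: 'vite.main.config.mjs' },
              { entry: 'src/preload.js', config: 'vite.preload.config.mjs' },
            ],
            // Renderer (browser window) builds
            renderer: [
              { name: 'main_window', config: 'vite.renderer.config.mjs' },
            ],
          },
        },
      ],
    };
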
  2. Understanding Process Separation and IPC (Key Difference from Web Dev)

    Unlike standard web development where your JavaScript runs in a sandboxed browser environment with limited access to the user’s system, Electron applications have two distinct types of processes:

    • Main Process:

      • Runs the script specified as main in your package.json (typically src/main.js or similar).
      • Has full Node.js access. It can interact with the operating system, access the file system, manage application windows (BrowserWindow), handle application lifecycle events, etc.
      • Think of it as the backend or the orchestrator of your application.
    • Renderer Process:

      • Each BrowserWindow instance runs its own independent renderer process.
      • Responsible for rendering web content (HTML, CSS, JavaScript – typically loaded from an index.html file and running scripts like src/renderer.js).
      • By default, and for crucial security reasons, the renderer process does not have direct access to Node.js APIs or sensitive system resources. Allowing web content (which could potentially load external or untrusted scripts) direct access to the file system or other OS features would be a major security vulnerability.

    The Bridge: Preload Scripts and IPC

    How does your user interface (renderer process) interact with the powerful capabilities of the main process (like saving a file)? The answer is Inter-Process Communication (IPC) facilitated by a special preload script.

    • Preload Script (src/preload.js):

      • This script is specified in your BrowserWindow configuration (webPreferences.preload).
      • It runs in the renderer process’s context but executes before the web page content (index.html, renderer.js) starts loading.
      • Crucially, the preload script has access to Node.js APIs (like require) and can interact with the renderer’s window and document objects.
      • Its primary role is to act as a secure bridge. It selectively exposes specific, controlled functionalities from the main process to the renderer process.
    • Inter-Process Communication (IPC):

      • Electron provides modules (ipcMain and ipcRenderer) to send messages between the main and renderer processes.
      • Pattern:
        1. The renderer process (via a function exposed by the preload script) sends a message requesting an action (ipcRenderer.send or ipcRenderer.invoke).
        2. The main process listens for specific message channels (ipcMain.on or ipcMain.handle).
        3. When a message arrives, the main process performs the requested action (e.g., read a file, show a dialog).
        4. If necessary, the main process sends a result back to the renderer (event.reply for on, or directly returning a value/Promise for handle).
    • Secure Exposure with contextBridge:

      • Instead of modifying the global window object directly in the preload script (which can be insecure), the recommended approach is to use the contextBridge module.
      • contextBridge.exposeInMainWorld('apiKey', { functionName: (...) => ... }); safely exposes your chosen functions (functionName) under a specific key (window.apiKey) in the renderer process. The renderer script (renderer.js) can then call window.apiKey.functionName().
      • This ensures only the APIs you explicitly define are available, maintaining the sandbox integrity of the renderer process.

    Example Flow:

    // src/main.js (Main Process)
    const { app, BrowserWindow, ipcMain } = require('electron');
    const path = require('node:path');
    // ... other setup ...

    function createWindow() {
      const mainWindow = new BrowserWindow({
        webPreferences: {
          preload: path.join(__dirname, 'preload.js'), // Load the preload script
          // Defaults: contextIsolation: true, nodeIntegration: false (Secure settings)
        },
      });
      // ... load index.html ...
    }

    // Listen for a message from the renderer asking to do something
    ipcMain.handle('perform-action', async (event, someArgument) => {
      console.log('Main process received:', someArgument);
      // Perform some Node.js action here (e.g., fs.readFile)
      const result = `Action performed with ${someArgument}`;
      return result; // Send result back to renderer
    });
    // ... app ready, etc. ...

    // src/preload.js (Preload Script - The Bridge)
    const { contextBridge, ipcRenderer } = require('electron');

    contextBridge.exposeInMainWorld('electronAPI', {
      // Expose a function called 'doAction' to the renderer
      doAction: (data) => ipcRenderer.invoke('perform-action', data),
    });
    console.log('Preload script loaded and API exposed.');

    // src/renderer.js (Renderer Process - The UI Logic)
    // This script runs in the browser window (index.html).
    // It CANNOT directly use 'require' or Node.js modules here.
    async function handleButtonClick() {
      const dataToSend = 'some data from UI';
      console.log('Renderer sending:', dataToSend);
      // Call the function exposed by the preload script
      const result = await window.electronAPI.doAction(dataToSend);
      console.log('Renderer received result:', result);
      // Update the UI with the result
      document.getElementById('response').innerText = result;
    }

    // Example: Assume you have a button with id="action-button" in your HTML
    const button = document.getElementById('action-button');
    if (button) {
      button.addEventListener('click', handleButtonClick);
    } else {
      console.error('Button not found');
      // Fallback: wait for DOM content to load
      window.addEventListener('DOMContentLoaded', () => {
        document.getElementById('action-button').addEventListener('click', handleButtonClick);
        // Also a good place for initial UI setup that needs the exposed API
        // const initialData = await window.electronAPI.getInitialData();
      });
    }
    // You might also have UI framework code here (React, Vue, etc.)
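
    The example above uses the invoke/handle pattern. The send/on variant from step 4 of the pattern list follows the same structure; the sketch below is illustrative only (the reply channel name and the exposed sendAction/onActionReply functions are made up for this example, and it replaces the invoke/handle wiring rather than adding to it):

    // src/main.js: alternative listener using ipcMain.on and event.reply
    ipcMain.on('perform-action', (event, someArgument) => {
      const result = `Action performed with ${someArgument}`;
      event.reply('perform-action-reply', result); // Reply on a separate channel
    });

    // src/preload.js: expose a one-way send plus a reply subscription
    contextBridge.exposeInMainWorld('electronAPI', {
      sendAction: (data) => ipcRenderer.send('perform-action', data),
      onActionReply: (callback) =>
        ipcRenderer.on('perform-action-reply', (_event, result) => callback(result)),
    });

    // src/renderer.js: register the reply handler first, then send the request
    window.electronAPI.onActionReply((result) => {
      document.getElementById('response').innerText = result;
    });
    window.electronAPI.sendAction('some data from UI');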

    This separation and controlled communication via preload scripts are fundamental to building secure and robust Electron applications, representing a significant difference from typical browser-based web development.

  3. Building the Application

    You then ran the build command:

    Terminal window
    npm run make
    • This executes the make script defined in your package.json, which points to electron-forge make.
    • Electron Forge (electron-forge make) orchestrates the build process:
      1. Reads forge.config.js: Determines how to build and package.
      2. Uses @electron-forge/plugin-vite: Instructs Vite to build and bundle your main, preload, and renderer code based on the vite.*.config.mjs files. Output typically goes to .vite/build/.
      3. Creates Distributables (makers): Generates platform-specific packages based on the makers array in forge.config.js:
        • @electron-forge/maker-squirrel: Creates a Windows installer (Setup.exe) using Squirrel.Windows.
        • @electron-forge/maker-zip: Creates a .zip archive (configured for macOS).
        • @electron-forge/maker-deb: Creates a .deb package (for Debian/Ubuntu Linux).
        • @electron-forge/maker-rpm: Creates an .rpm package (for Fedora/RHEL Linux).
      4. Output: Places the final packages in the out directory.
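
    The makers listed above correspond to the default makers array the template generates in forge.config.js; a rough sketch (options trimmed, exact defaults vary by template version):

    // forge.config.js (sketch of the default makers section; options trimmed)
    module.exports = {
      // ... packagerConfig, plugins, etc. ...
      makers: [
        { name: '@electron-forge/maker-squirrel', config: {} },       // Windows Setup.exe
        { name: '@electron-forge/maker-zip', platforms: ['darwin'] }, // macOS .zip
        { name: '@electron-forge/maker-deb', config: {} },            // Debian/Ubuntu .deb
        { name: '@electron-forge/maker-rpm', config: {} },            // Fedora/RHEL .rpm
      ],
    };
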
  4. Understanding the Default Windows Installer Behavior (.exe)

    When you ran the .exe generated by the default configuration, you observed:

    • No Installation Wizard: It installed directly without typical steps (Welcome, License, Path Selection).
    • Quick Installation: The process was very fast.
    • Immediate Launch: The application opened right after installation.

    This behavior is characteristic of Squirrel.Windows, the technology used by @electron-forge/maker-squirrel:

    • Design Philosophy: Prioritizes simplicity, speed, and seamless background updates over complex installation options.
    • No Wizard by Default: Intentionally avoids a multi-step wizard.
    • User-Specific Install: Typically installs to the user’s %LocalAppData% folder, often not requiring admin rights.
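
    A related detail: during install, update, and uninstall, Squirrel briefly launches your app with special command-line arguments so it can create or remove shortcuts. The Forge templates typically guard against this in the main process with the electron-squirrel-startup package; a minimal sketch of that guard (the generated main.js usually contains it already):

    // src/main.js: quit immediately when launched by Squirrel.Windows during
    // install/update/uninstall, so only the shortcut handling runs and no window is shown.
    const { app } = require('electron');

    if (require('electron-squirrel-startup')) {
      app.quit();
    }
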
  5. Optional: Switching to a Traditional Windows Installer (NSIS)

    If you prefer the conventional Windows installer experience with a setup wizard (License agreement, installation path selection, etc.), you need to use a different maker, such as @electron-forge/maker-nsis.

    1. Install the NSIS maker:

      Terminal window
      npm install --save-dev @electron-forge/maker-nsis
      # or
      # yarn add --dev @electron-forge/maker-nsis
    2. Modify forge.config.js: Update the makers array to use @electron-forge/maker-nsis instead of (or in addition to) @electron-forge/maker-squirrel.

      forge.config.js
      module.exports = {
        // ... other config ...
        makers: [
          {
            name: '@electron-forge/maker-nsis',
            config: {
              // Add NSIS-specific options here if needed
              // e.g., setupIcon: './path/to/icon.ico'
            },
          },
          // If you want Squirrel alongside NSIS (uncommon but possible)
          // { name: '@electron-forge/maker-squirrel', config: {} },
          // Keep other makers like zip, deb, rpm as needed
          { name: '@electron-forge/maker-zip', platforms: ['darwin'] },
          { name: '@electron-forge/maker-deb', config: {} },
          { name: '@electron-forge/maker-rpm', config: {} },
        ],
        // ... plugins ...
      };
    3. Re-run the build:

      Terminal window
      npm run make
      This will now generate an NSIS-based `.exe` installer (likely named something like `YourApp Setup Version.exe`) in the `out/make/nsis` directory, providing the familiar wizard interface.
  6. Automating Releases with GitHub Actions

    Initially, publishing was done locally using npm run publish. However, this proved to be very slow due to the large size of the distributable files (.exe, .zip, .deb, etc.) being uploaded over your local internet connection. Electron apps bundle the Chromium runtime, leading to package sizes often exceeding 100MB per platform.

    To address this bottleneck and automate the release process, we implemented a GitHub Actions workflow.

    Why GitHub Actions?

    • Speed: GitHub Actions run on servers with very fast internet connections, dramatically reducing the time spent uploading large artifact files.
    • Automation: The entire process of building for multiple platforms and publishing to GitHub Releases is triggered automatically when you push a specific Git tag (e.g., v1.2.3).
    • Consistency: Ensures the build and release process is the same every time, reducing errors caused by local environment differences.
    • Free for Public Repos: GitHub Actions provides generous free tiers, especially for open-source projects.

    Setting up GitHub Token for Publishing

    Before running npm run publish, set your GitHub token as an environment variable based on your operating system:

    Terminal window
    # Windows (Command Prompt)
    set GITHUB_TOKEN=YOUR_COPIED_TOKEN_HERE
    # Windows (PowerShell)
    $env:GITHUB_TOKEN="YOUR_COPIED_TOKEN_HERE"
    # macOS / Linux (Bash, Zsh)
    export GITHUB_TOKEN="YOUR_COPIED_TOKEN_HERE"
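
    For the local npm run publish flow, Electron Forge also expects a publisher to be configured in forge.config.js. A minimal sketch using the official GitHub publisher (@electron-forge/publisher-github); the owner and repository name below are placeholders:

    // forge.config.js (sketch; 'your-github-username' and 'my-app' are placeholders)
    module.exports = {
      // ... makers, plugins, etc. ...
      publishers: [
        {
          name: '@electron-forge/publisher-github',
          config: {
            repository: { owner: 'your-github-username', name: 'my-app' },
            prerelease: false,
            draft: true, // Create the release as a draft so it can be reviewed first
          },
        },
      ],
    };

    With that in place, npm run publish builds and uploads releases from your machine; the workflow below performs the same build-and-upload work on GitHub's runners instead.
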
    .github/workflows/release.yml
    name: Release Electron App

    on:
      # Triggers the workflow on pushes that create a tag matching the pattern v*.*.* (e.g., v1.0.0, v1.2.3)
      push:
        tags:
          - 'v*.*.*'

    jobs:
      # Job to build for Windows x64
      build-windows-x64:
        name: Build Windows (x64)
        runs-on: windows-latest # Native x64 runner
        steps:
          - name: Checkout Repository
            uses: actions/checkout@v4
            with:
              fetch-depth: 0

          - name: Setup Node.js
            uses: actions/setup-node@v4
            with:
              node-version: '20'
              cache: 'npm'

          - name: Install Dependencies
            # npm ci will download win32-x64 ffmpeg via ffmpeg-static
            run: npm ci

          - name: Build Windows x64 (Make)
            # Electron Forge should auto-detect win32/x64 here, or use explicit flags
            run: npm run make # Optional: -- --platform=win32 --arch=x64

          - name: Upload Windows x64 Artifact
            uses: actions/upload-artifact@v4
            with:
              name: windows-x64-artifact
              path: out/make/**/* # Upload squirrel/zip output

      # Job to build for Windows ARM64
      build-windows-arm64:
        name: Build Windows (ARM64)
        # --- USE NATIVE ARM64 RUNNER ---
        runs-on: windows-11-arm
        steps:
          - name: Checkout Repository
            uses: actions/checkout@v4
            with:
              fetch-depth: 0

          - name: Setup Node.js
            uses: actions/setup-node@v4
            with:
              node-version: '20'
              cache: 'npm'

          - name: Install Dependencies
            # npm ci will download win32-arm64 ffmpeg via ffmpeg-static
            run: npm ci

          - name: Build Windows arm64 (Make)
            # Electron Forge should auto-detect win32/arm64 here, or use explicit flags
            run: npm run make # Optional: -- --platform=win32 --arch=arm64

          - name: Upload Windows arm64 Artifact
            uses: actions/upload-artifact@v4
            with:
              name: windows-arm64-artifact
              path: out/make/**/* # Upload squirrel/zip output

      # Job to build for Linux x64
      build-linux-x64:
        name: Build Linux (x64)
        runs-on: ubuntu-latest # Native x64 runner
        steps:
          - name: Checkout Repository
            uses: actions/checkout@v4
            with:
              fetch-depth: 0

          - name: Setup Node.js
            uses: actions/setup-node@v4
            with:
              node-version: '20'
              cache: 'npm'

          # No extra dependencies needed for native x64 build usually
          # - name: Install Linux Build Dependencies ...

          - name: Install Project Dependencies
            # npm ci will download linux-x64 ffmpeg via ffmpeg-static
            run: npm ci

          - name: Build Linux x64 (Make)
            # Electron Forge should auto-detect linux/x64 here, or use explicit flags
            run: npm run make # Optional: -- --platform=linux --arch=x64

          - name: Upload Linux x64 Artifact
            uses: actions/upload-artifact@v4
            with:
              # --- UNIQUE ARTIFACT NAME ---
              name: linux-x64-artifact
              path: out/make/**/* # Upload deb/rpm/etc. output

      # Job to build for Linux arm64
      build-linux-arm64:
        name: Build Linux (ARM64)
        # --- USE NATIVE ARM64 RUNNER ---
        runs-on: ubuntu-22.04-arm
        steps:
          - name: Checkout Repository
            uses: actions/checkout@v4
            with:
              fetch-depth: 0

          - name: Setup Node.js
            uses: actions/setup-node@v4
            with:
              node-version: '20'
              cache: 'npm'

          # No extra dependencies needed for native arm64 build usually
          # - name: Install Linux Build Dependencies ...

          - name: Install Project Dependencies
            # npm ci will download linux-arm64 ffmpeg via ffmpeg-static
            run: npm ci

          - name: Build Linux arm64 (Make)
            # Electron Forge should auto-detect linux/arm64 here, or use explicit flags
            run: npm run make # Optional: -- --platform=linux --arch=arm64

          - name: Upload Linux arm64 Artifact
            uses: actions/upload-artifact@v4
            with:
              # --- UNIQUE ARTIFACT NAME ---
              name: linux-arm64-artifact
              path: out/make/**/* # Upload deb/rpm/etc. output

      # Job to create the GitHub Release and upload all built artifacts
      release:
        name: Create GitHub Release
        runs-on: ubuntu-latest
        # --- DEPENDS ON ALL 4 BUILD JOBS ---
        needs: [build-windows-x64, build-windows-arm64, build-linux-x64, build-linux-arm64]
        permissions:
          contents: write # Needed to create releases and upload assets
        steps:
          - name: Checkout Repository # Needed to get tag info
            uses: actions/checkout@v4

          # Create a directory to download all artifacts into
          - name: Create Staging Directory
            run: mkdir staging

          # --- ARTIFACT DOWNLOADS ---
          - name: Download Windows x64 Artifact
            uses: actions/download-artifact@v4
            with:
              name: windows-x64-artifact
              path: staging/windows-x64

          - name: Download Windows arm64 Artifact
            uses: actions/download-artifact@v4
            with:
              name: windows-arm64-artifact
              path: staging/windows-arm64

          - name: Download Linux x64 Artifact
            uses: actions/download-artifact@v4
            with:
              name: linux-x64-artifact
              path: staging/linux-x64 # Store in separate sub-directory

          - name: Download Linux arm64 Artifact
            uses: actions/download-artifact@v4
            with:
              name: linux-arm64-artifact
              path: staging/linux-arm64 # Store in separate sub-directory

          # --- RENAMING STEPS (Verify working-directory paths based on actual output) ---
          # These assume default Forge output structure like out/make/squirrel.windows/x64/*.exe etc.
          # Adjust paths if your makers produce different structures.
          - name: Rename Windows x64 Artifacts for Uniqueness
            # !! Verify this path matches the actual downloaded structure !!
            working-directory: staging/windows-x64/squirrel.windows/x64
            run: |
              echo "--- Files before renaming in $(pwd) ---"; ls -l
              for FILE in *; do
                if [[ -f "$FILE" ]]; then
                  FILENAME="${FILE%.*}"; EXTENSION="${FILE##*.}"
                  if [[ "$FILENAME" == "$EXTENSION" ]] || [[ "$FILENAME" == "" ]]; then NEW_NAME="${FILE}-x64"; else NEW_NAME="${FILENAME}-x64.${EXTENSION}"; fi
                  echo "Renaming '$FILE' to '$NEW_NAME'"; mv "$FILE" "$NEW_NAME"
                fi
              done
              echo "--- Files after renaming in $(pwd) ---"; ls -l
            # Continue even if no files are found (e.g., if maker changes)
            continue-on-error: true

          - name: Rename Windows arm64 Artifacts for Uniqueness
            # !! Verify this path matches the actual downloaded structure !!
            working-directory: staging/windows-arm64/squirrel.windows/arm64
            run: |
              echo "--- Files before renaming in $(pwd) ---"; ls -l
              for FILE in *; do
                if [[ -f "$FILE" ]]; then
                  FILENAME="${FILE%.*}"; EXTENSION="${FILE##*.}"
                  if [[ "$FILENAME" == "$EXTENSION" ]] || [[ "$FILENAME" == "" ]]; then NEW_NAME="${FILE}-arm64"; else NEW_NAME="${FILENAME}-arm64.${EXTENSION}"; fi
                  echo "Renaming '$FILE' to '$NEW_NAME'"; mv "$FILE" "$NEW_NAME"
                fi
              done
              echo "--- Files after renaming in $(pwd) ---"; ls -l
            # Continue even if no files are found
            continue-on-error: true

          # Optional: Rename Linux artifacts if needed (e.g., add arch to .deb)
          - name: Rename Linux x64 Artifacts for Uniqueness
            # !! Verify this path matches the actual downloaded structure (e.g., deb/x64 or similar) !!
            working-directory: staging/linux-x64/deb/x64 # Example path for deb maker
            run: |
              echo "--- Files before renaming in $(pwd) ---"; ls -l
              for FILE in *.deb; do # Adjust wildcard if using rpm etc.
                if [[ -f "$FILE" ]] && [[ ! "$FILE" =~ -x64\.deb$ ]]; then # Avoid double-renaming
                  NEW_NAME="${FILE%.deb}-x64.deb"
                  echo "Renaming '$FILE' to '$NEW_NAME'"; mv "$FILE" "$NEW_NAME"
                fi
              done
              echo "--- Files after renaming in $(pwd) ---"; ls -l
            # Continue even if no files are found
            continue-on-error: true

          - name: Rename Linux arm64 Artifacts for Uniqueness
            # !! Verify this path matches the actual downloaded structure (e.g., deb/arm64 or similar) !!
            working-directory: staging/linux-arm64/deb/arm64 # Example path for deb maker
            run: |
              echo "--- Files before renaming in $(pwd) ---"; ls -l
              for FILE in *.deb; do # Adjust wildcard if using rpm etc.
                if [[ -f "$FILE" ]] && [[ ! "$FILE" =~ -arm64\.deb$ ]]; then # Avoid double-renaming
                  NEW_NAME="${FILE%.deb}-arm64.deb"
                  echo "Renaming '$FILE' to '$NEW_NAME'"; mv "$FILE" "$NEW_NAME"
                fi
              done
              echo "--- Files after renaming in $(pwd) ---"; ls -l
            # Continue even if no files are found
            continue-on-error: true
          # --- END OF RENAMING STEPS ---

          - name: List final staging files Tree
            run: ls -R staging

          # Use a dedicated action to create the release and upload all artifacts
          - name: Create GitHub Release and Upload Artifacts
            uses: softprops/action-gh-release@v2
            with:
              tag_name: ${{ github.ref_name }}
              name: Release ${{ github.ref_name }}
              # Glob pattern should find all files within the staging directory,
              # including those renamed within their subdirectories.
              files: staging/**/*
            env:
              GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

    How the Workflow Works (.github/workflows/release.yml)

    The workflow consists of multiple jobs designed to build the application on different operating systems and then consolidate the results into a single GitHub Release:

    1. Trigger: The entire workflow is triggered only when a Git tag matching the pattern v*.*.* is pushed to the repository.
    2. Build Jobs (build-windows-x64, build-windows-arm64, build-linux-x64, build-linux-arm64):
      • These jobs run in parallel on separate virtual machines (windows-latest, windows-11-arm, ubuntu-latest, and ubuntu-22.04-arm).
      • Each job checks out the code, sets up Node.js, installs dependencies (npm ci), and runs npm run make. Because each job runs on a native runner for its target, Electron Forge detects the platform and architecture automatically; explicit flags (e.g., -- --platform=win32 --arch=x64) remain optional.
      • Crucially, each build job uploads its resulting distributables (from the out/make directory) as a named artifact (e.g., windows-x64-artifact). Artifacts are temporary storage for files within a workflow run.
    3. Release Job (release):
      • This job waits for all the build jobs to complete successfully (needs: [...]).
      • It runs on an ubuntu-latest runner.
      • It downloads all the artifacts created by the build jobs into a staging directory.
      • Artifact Renaming (Optional but Recommended): Includes steps to rename artifacts (like Setup.exe, .nupkg, .deb) within their specific subdirectories (e.g., staging/windows-x64/squirrel.windows/x64) to add architecture suffixes (e.g., -x64, -arm64). This prevents naming collisions when uploading multiple platform versions of similarly named files (like Setup.exe) to the GitHub Release. Adjust the paths and logic based on the makers you are using (Squirrel, NSIS, Deb, etc.).
      • Create GitHub Release: It uses the softprops/action-gh-release action to:
        • Create a new GitHub Release associated with the triggering tag (${{ github.ref_name }}).
        • Upload all files found within the staging directory (including the renamed ones in their subdirectories) as assets to that release.
      • Authentication: It uses the automatically provided GITHUB_TOKEN secret (enabled via the permissions: contents: write setting) for authenticating with GitHub to create the release and upload assets. No manual Personal Access Token is needed in the workflow file itself.

    By using this GitHub Actions workflow, the time-consuming build and upload process is offloaded to GitHub’s infrastructure, resulting in a much faster and fully automated release pipeline for your Electron application whenever you push a new version tag.