CLP Modern Framework Deploy & CI/CD Integration

08/12/2024 devops 6 mins read

Advanced Deployment Guide: Modern Framework Deployment & CI/CD Integration

Part 1: Advanced Framework Deployments

Astro Deployment

Let’s deploy an Astro application with server-side rendering (SSR) and dynamic routes.

  1. First, create a Node.js site for Astro:
Terminal window
clpctl site:add:nodejs \
--domainName=astro.yourdomain.com \
--nodejsVersion=18 \
--appPort=4321 \
--siteUser=astro-admin \
--siteUserPassword='secure_password_here'
  2. Set up the Astro project with SSR:
Terminal window
cd htdocs/astro.yourdomain.com/
# Create new Astro project
npm create astro@latest
# Add SSR adapter
npm install @astrojs/node
  3. Configure Astro for SSR (astro.config.mjs):
import { defineConfig } from 'astro/config';
import node from '@astrojs/node';

export default defineConfig({
  output: 'server',
  adapter: node({
    mode: 'standalone'
  }),
  server: {
    port: 4321,
    host: true
  }
});
  4. Create PM2 configuration for Astro:
ecosystem.config.js
module.exports = {
  apps: [{
    name: 'astro-app',
    script: './dist/server/entry.mjs',
    env: {
      HOST: '127.0.0.1',
      PORT: 4321,
      NODE_ENV: 'production'
    },
    exec_mode: 'cluster',
    instances: 'max',
    max_memory_restart: '500M'
  }]
}

Next.js Advanced Deployment

Let’s deploy a Next.js application with custom server configuration and internationalization.

  1. Set up the site:
Terminal window
clpctl site:add:nodejs \
--domainName=next.yourdomain.com \
--nodejsVersion=18 \
--appPort=3000 \
--siteUser=next-admin \
--siteUserPassword='secure_password_here'
  2. Configure Next.js with a custom server:
server.js
const { createServer } = require('http');
const { parse } = require('url');
const next = require('next');

const dev = process.env.NODE_ENV !== 'production';
const app = next({ dev });
const handle = app.getRequestHandler();

app.prepare().then(() => {
  createServer((req, res) => {
    const parsedUrl = parse(req.url, true);
    handle(req, res, parsedUrl);
  }).listen(3000, (err) => {
    if (err) throw err;
    console.log('> Ready on http://localhost:3000');
  });
});
  3. Advanced PM2 configuration for Next.js:
ecosystem.config.js
module.exports = {
  apps: [{
    name: 'nextjs-advanced',
    script: 'server.js',
    instances: 'max',
    exec_mode: 'cluster',
    watch: false,
    max_memory_restart: '1G',
    env: {
      NODE_ENV: 'production',
      PORT: 3000
    },
    env_production: {
      NODE_ENV: 'production',
      PORT: 3000
    }
  }]
}

Docusaurus Deployment

Let’s deploy a Docusaurus site with versioned documentation and search functionality.

  1. Create a static site for Docusaurus:
Terminal window
clpctl site:add:static \
--domainName=docs.yourdomain.com \
--siteUser=docs-admin \
--siteUserPassword='secure_password_here'
  2. Set up Docusaurus with versioning:
Terminal window
cd htdocs/docs.yourdomain.com/
npm init docusaurus@latest
npm run build
  3. Configure the site (docusaurus.config.js):
docusaurus.config.js
module.exports = {
  title: 'Your Documentation',
  tagline: 'Documentation made easy',
  url: 'https://docs.yourdomain.com',
  baseUrl: '/',
  onBrokenLinks: 'throw',
  onBrokenMarkdownLinks: 'warn',
  favicon: 'img/favicon.ico',
  organizationName: 'your-org',
  projectName: 'your-docs',
  presets: [
    [
      '@docusaurus/preset-classic',
      {
        docs: {
          sidebarPath: require.resolve('./sidebars.js'),
          editUrl: 'https://github.com/your-org/your-docs/edit/main/',
          showLastUpdateTime: true,
          showLastUpdateAuthor: true,
        },
      },
    ],
  ],
};
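The config above covers the site basics but not the versioning itself. Versions are cut with the Docusaurus CLI (`npm run docusaurus docs:version 1.0`), and can then be labeled via the docs preset options. A sketch of that fragment — the version names and labels here are illustrative, not part of the original guide:

```javascript
// Fragment for the docs entry inside '@docusaurus/preset-classic'
// (version name '1.0' and the labels are assumptions for illustration)
docs: {
  sidebarPath: require.resolve('./sidebars.js'),
  lastVersion: 'current',        // which version lives at the root path
  versions: {
    current: {
      label: 'Next',             // the in-progress docs under docs/
      path: 'next',
    },
    '1.0': {
      label: '1.0',              // created by `docusaurus docs:version 1.0`
    },
  },
},
```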

Part 2: Advanced PM2 Management

Load Balancing with PM2

  1. Cluster Mode Configuration:
ecosystem.config.js
module.exports = {
  apps: [{
    name: 'load-balanced-app',
    script: 'server.js',
    instances: -1, // all CPUs minus one ('max' or 0 uses every core)
    exec_mode: 'cluster',
    watch: false, // avoid file-watch restarts in production
    max_memory_restart: '1G',
    env: {
      NODE_ENV: 'production',
      PORT: 3000
    },
    node_args: '--max_old_space_size=4096'
  }]
}
  2. Advanced PM2 Monitoring:
Terminal window
# Monitor memory usage
pm2 monit
# Set up PM2 monitoring dashboard
pm2 plus
# Advanced logs with timestamp
pm2 logs --timestamp --lines 1000
# Rotate logs
pm2 install pm2-logrotate
pm2 set pm2-logrotate:max_size 10M
pm2 set pm2-logrotate:retain 7
  3. Zero-Downtime Deployment:
Terminal window
# Reload without downtime
pm2 reload ecosystem.config.js --env production
# Rolling reload
pm2 reload ecosystem.config.js --env production --parallel 2
# Graceful reload
pm2 gracefulReload all
  4. Memory Management:
// ecosystem.config.js with memory management
module.exports = {
  apps: [{
    name: 'memory-managed-app',
    script: 'server.js',
    instances: 4,
    exec_mode: 'cluster',
    max_memory_restart: '1G',
    node_args: '--max_old_space_size=4096',
    increment_var: 'PORT',
    env: {
      NODE_ENV: 'production',
      PORT: 3000
    },
    error_file: 'logs/err.log',
    out_file: 'logs/out.log',
    merge_logs: true,
    exp_backoff_restart_delay: 100
  }]
}
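A note on `increment_var` in the config above: PM2 bumps the named environment variable once per instance index, so each process can receive its own port. That matters mainly in fork mode behind an external load balancer; in cluster mode the instances share a single listening socket. Conceptually:

```javascript
// Sketch of what increment_var: 'PORT' yields per PM2 instance.
// basePort corresponds to env.PORT in the ecosystem file.
function portsForInstances(basePort, instances) {
  return Array.from({ length: instances }, (_, i) => basePort + i);
}

console.log(portsForInstances(3000, 4)); // [3000, 3001, 3002, 3003]
```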

Part 3: GitHub CI/CD Integration

GitHub Actions Workflow for Node.js Application

  1. Basic Deployment Workflow:
.github/workflows/deploy.yml
name: Deploy to Production
on:
  push:
    branches: [ main ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '18'
      - name: Install Dependencies
        run: npm ci
      - name: Build
        run: npm run build
      - name: Deploy to Server
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: ${{ secrets.SERVER_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            cd ~/htdocs/your-domain.com
            git pull origin main
            npm ci
            npm run build
            pm2 reload ecosystem.config.js --env production
  2. Advanced Workflow with Testing and Staging:
name: CI/CD Pipeline
on:
  push:
    branches: [ main, staging ]
  pull_request:
    branches: [ main ]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '18'
      - name: Install Dependencies
        run: npm ci
      - name: Run Tests
        run: npm test
      - name: Run Linting
        run: npm run lint
  deploy-staging:
    needs: test
    if: github.ref == 'refs/heads/staging'
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to Staging
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.STAGING_HOST }}
          username: ${{ secrets.STAGING_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            cd ~/htdocs/staging.your-domain.com
            git pull origin staging
            npm ci
            npm run build
            pm2 reload ecosystem.config.js --env staging
  deploy-production:
    needs: [test]
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to Production
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.PROD_HOST }}
          username: ${{ secrets.PROD_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            cd ~/htdocs/your-domain.com
            git pull origin main
            npm ci
            npm run build
            pm2 reload ecosystem.config.js --env production
  3. Automated Backup Before Deployment:
# Add this step before deployment
- name: Backup Database
  uses: appleboy/ssh-action@master
  with:
    host: ${{ secrets.SERVER_HOST }}
    username: ${{ secrets.SERVER_USER }}
    key: ${{ secrets.SSH_PRIVATE_KEY }}
    script: |
      clpctl db:backup --retentionPeriod=7
      clpctl remote-backup:create
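After any of the deploy jobs above, it is worth probing the live site before declaring the pipeline green. A smoke test normally retries with exponential backoff while PM2 finishes reloading; a minimal sketch (the URL, attempt count, and delays are assumptions, not from the original guide):

```javascript
// Exponential backoff schedule for post-deploy health checks.
function backoffDelays(attempts, baseMs) {
  return Array.from({ length: attempts }, (_, i) => baseMs * 2 ** i);
}

// Treat only 2xx responses as healthy.
function isHealthy(statusCode) {
  return statusCode >= 200 && statusCode < 300;
}

// Usage sketch (Node 18+ ships a global fetch):
async function smokeTest(url) {
  for (const delay of backoffDelays(5, 500)) {
    const res = await fetch(url).catch(() => null);
    if (res && isHealthy(res.status)) return true;
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
  return false;
}

console.log(backoffDelays(5, 500)); // [500, 1000, 2000, 4000, 8000]
```

Run as an extra SSH (or local runner) step, a non-zero exit from such a script fails the workflow and surfaces broken deploys immediately.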

Security Best Practices

  1. Environment Variables:
Terminal window
# On your CloudPanel server
nano ~/.env.production
# Add your environment variables
DATABASE_URL=postgresql://user:pass@localhost:5432/db
REDIS_URL=redis://localhost:6379
JWT_SECRET=your-secret-key
# PM2 has no flag to load an env file directly; select the matching
# env block defined in ecosystem.config.js instead (or load the file
# from your app code with a package such as dotenv)
pm2 start ecosystem.config.js --env production
  2. SSL Certificate Automation:
# Add to GitHub Actions workflow
- name: Update SSL Certificate
  uses: appleboy/ssh-action@master
  with:
    host: ${{ secrets.SERVER_HOST }}
    username: ${{ secrets.SERVER_USER }}
    key: ${{ secrets.SSH_PRIVATE_KEY }}
    script: |
      clpctl lets-encrypt:renew:certificates
  3. Monitoring Setup:
// ecosystem.config.js with monitoring
module.exports = {
  apps: [{
    name: 'monitored-app',
    script: 'server.js',
    instances: 'max',
    exec_mode: 'cluster',
    max_memory_restart: '1G',
    merge_logs: true,
    log_date_format: 'YYYY-MM-DD HH:mm:ss Z',
    error_file: 'logs/err.log',
    out_file: 'logs/out.log',
    log_type: 'json',
    watch: ['server', 'client'],
    ignore_watch: ['node_modules', 'logs']
  }]
}
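Because PM2 cannot read an env file itself, apps typically load `~/.env.production` in their own startup code — the `dotenv` package is the standard tool for this. The format is simple enough that a hand-rolled parser shows exactly what happens (a sketch that ignores the quoting and escape rules dotenv handles):

```javascript
// Minimal .env parser: KEY=VALUE lines; '#' comments and blanks ignored.
// (dotenv also handles quoted values and escapes; this sketch does not.)
function parseEnv(text) {
  const out = {};
  for (const line of text.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith('#')) continue;
    const eq = trimmed.indexOf('=');
    if (eq === -1) continue;
    out[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
  }
  return out;
}

const env = parseEnv(
  '# comment\nREDIS_URL=redis://localhost:6379\nJWT_SECRET=your-secret-key\n'
);
console.log(env.REDIS_URL); // 'redis://localhost:6379'
```

In practice you would read the file with `fs.readFileSync` (or just call `require('dotenv').config()`) and merge the result into `process.env` before the server starts.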

Remember to always test your CI/CD pipeline in a staging environment first and maintain proper backup strategies before deploying to production. Regular monitoring and logging are crucial for maintaining a healthy production environment.