CLP Modern Framework Deploy & CI/CD Integration
08/12/2024 · devops · 6 min read
Table Of Contents
- Advanced Deployment Guide: Modern Framework Deployment & CI/CD Integration
- Part 1: Advanced Framework Deployments
- Astro Deployment
- Next.js Advanced Deployment
- Docusaurus Deployment
- Part 2: Advanced PM2 Management
- Load Balancing with PM2
- Part 3: GitHub CI/CD Integration
- GitHub Actions Workflow for Node.js Application
- Security Best Practices
Advanced Deployment Guide: Modern Framework Deployment & CI/CD Integration
Part 1: Advanced Framework Deployments
Astro Deployment
Let’s deploy an Astro application with server-side rendering (SSR) and dynamic routes.
- First, create a Node.js site for Astro:
clpctl site:add:nodejs \
  --domainName=astro.yourdomain.com \
  --nodejsVersion=18 \
  --appPort=4321 \
  --siteUser=astro-admin \
  --siteUserPassword='secure_password_here'
- Set up the Astro project with SSR:
cd htdocs/astro.yourdomain.com/

# Create new Astro project
npm create astro@latest

# Add SSR adapter
npm install @astrojs/node
- Configure Astro for SSR (astro.config.mjs):
import { defineConfig } from 'astro/config';
import node from '@astrojs/node';

export default defineConfig({
  output: 'server',
  adapter: node({ mode: 'standalone' }),
  server: {
    port: 4321,
    host: true
  }
});
- Create PM2 configuration for Astro:
module.exports = {
  apps: [{
    name: 'astro-app',
    script: './dist/server/entry.mjs',
    env: {
      HOST: '127.0.0.1',
      PORT: 4321,
      NODE_ENV: 'production'
    },
    exec_mode: 'cluster',
    instances: 'max',
    max_memory_restart: '500M'
  }]
}
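With the adapter and PM2 config in place, the first deployment is just a build followed by a PM2 start. A minimal sketch, assuming the config above is saved as ecosystem.config.js in the project root:

# Build the SSR bundle (the standalone Node adapter emits dist/server/entry.mjs)
npm run build

# Start the app under PM2 and persist the process list across reboots
pm2 start ecosystem.config.js
pm2 save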
Next.js Advanced Deployment
Let’s deploy a Next.js application with a custom server configuration and internationalization (an i18n config sketch follows the PM2 step below).
- Set up the site:
clpctl site:add:nodejs \
  --domainName=next.yourdomain.com \
  --nodejsVersion=18 \
  --appPort=3000 \
  --siteUser=next-admin \
  --siteUserPassword='secure_password_here'
- Configure Next.js with custom server:
const { createServer } = require('http');
const { parse } = require('url');
const next = require('next');

const dev = process.env.NODE_ENV !== 'production';
const app = next({ dev });
const handle = app.getRequestHandler();

app.prepare().then(() => {
  createServer((req, res) => {
    const parsedUrl = parse(req.url, true);
    handle(req, res, parsedUrl);
  }).listen(3000, (err) => {
    if (err) throw err;
    console.log('> Ready on http://localhost:3000');
  });
});
- Advanced PM2 configuration for Next.js:
module.exports = {
  apps: [{
    name: 'nextjs-advanced',
    script: 'server.js',
    instances: 'max',
    exec_mode: 'cluster',
    watch: false,
    max_memory_restart: '1G',
    env: {
      NODE_ENV: 'production',
      PORT: 3000
    },
    env_production: {
      NODE_ENV: 'production',
      PORT: 3000
    }
  }]
}
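As for the internationalization mentioned above, Next.js (Pages Router) can handle locale-aware routing from next.config.js without extra middleware. A minimal sketch with placeholder locales:

// next.config.js – built-in i18n routing (locale list is a placeholder)
module.exports = {
  i18n: {
    locales: ['en', 'de', 'fr'],
    defaultLocale: 'en'
  }
};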
Docusaurus Deployment
Let’s deploy a Docusaurus site with versioned documentation and search functionality.
- Create a static site for Docusaurus:
clpctl site:add:static \
  --domainName=docs.yourdomain.com \
  --siteUser=docs-admin \
  --siteUserPassword='secure_password_here'
- Set up Docusaurus with versioning:
cd htdocs/docs.yourdomain.com/
npm init docusaurus@latest
npm run build
- Configure the site (docusaurus.config.js); the versioning step follows the config below:
module.exports = {
  title: 'Your Documentation',
  tagline: 'Documentation made easy',
  url: 'https://docs.yourdomain.com',
  baseUrl: '/',
  onBrokenLinks: 'throw',
  onBrokenMarkdownLinks: 'warn',
  favicon: 'img/favicon.ico',
  organizationName: 'your-org',
  projectName: 'your-docs',

  presets: [
    [
      '@docusaurus/preset-classic',
      {
        docs: {
          sidebarPath: require.resolve('./sidebars.js'),
          editUrl: 'https://github.com/your-org/your-docs/edit/main/',
          showLastUpdateTime: true,
          showLastUpdateAuthor: true,
        },
      },
    ],
  ],
};
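The build above only contains the current docs; to get the versioned documentation this step promises, snapshot a version with the Docusaurus CLI and rebuild. A minimal sketch, assuming the default docusaurus script generated by npm init docusaurus and a placeholder version label:

# Freeze the current docs as version 1.0.0
# (creates versioned_docs/, versioned_sidebars/ and versions.json)
npm run docusaurus docs:version 1.0.0

# Rebuild so the static output includes the new version
npm run build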
Part 2: Advanced PM2 Management
Load Balancing with PM2
- Cluster Mode Configuration:
module.exports = {
  apps: [{
    name: 'load-balanced-app',
    script: 'server.js',
    instances: -1, // All available CPUs minus one (PM2 treats negative values as max minus N)
    exec_mode: 'cluster',
    watch: true,
    max_memory_restart: '1G',
    env: {
      NODE_ENV: 'production',
      PORT: 3000
    },
    node_args: '--max_old_space_size=4096'
  }]
}
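Once the cluster is up, the worker count can also be adjusted at runtime without editing the config. A quick sketch using PM2's scale command (the app name matches the config above):

# Grow the cluster to 4 workers
pm2 scale load-balanced-app 4

# Add 2 more workers on top of the current count
pm2 scale load-balanced-app +2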
- Advanced PM2 Monitoring:
# Monitor memory usage
pm2 monit

# Set up PM2 monitoring dashboard
pm2 plus

# Advanced logs with timestamp
pm2 logs --timestamp --lines 1000

# Rotate logs
pm2 install pm2-logrotate
pm2 set pm2-logrotate:max_size 10M
pm2 set pm2-logrotate:retain 7
- Zero-Downtime Deployment:
# Reload without downtime
pm2 reload ecosystem.config.js --env production

# Rolling reload
pm2 reload ecosystem.config.js --env production --parallel 2

# Graceful reload
pm2 gracefulReload all
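Reloads are only truly zero-downtime if the application shuts down cleanly when PM2 asks it to. A minimal sketch of a shutdown handler, assuming server is the Node http server created in server.js:

// PM2 sends SIGINT on reload and waits (kill_timeout, 1600 ms by default)
// before force-killing the old process, so stop accepting new connections
// and let in-flight requests finish.
process.on('SIGINT', () => {
  server.close(() => process.exit(0));

  // Safety net: force exit if connections refuse to drain in time
  setTimeout(() => process.exit(1), 10000);
});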
- Memory Management:
// ecosystem.config.js with memory management
module.exports = {
  apps: [{
    name: 'memory-managed-app',
    script: 'server.js',
    instances: 4,
    exec_mode: 'cluster',
    max_memory_restart: '1G',
    node_args: '--max_old_space_size=4096',
    increment_var: 'PORT',
    env: {
      NODE_ENV: 'production',
      PORT: 3000
    },
    error_file: 'logs/err.log',
    out_file: 'logs/out.log',
    merge_logs: true,
    exp_backoff_restart_delay: 100
  }]
}
Part 3: GitHub CI/CD Integration
GitHub Actions Workflow for Node.js Application
- Basic Deployment Workflow:
name: Deploy to Production

on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2

      - name: Setup Node.js
        uses: actions/setup-node@v2
        with:
          node-version: '18'

      - name: Install Dependencies
        run: npm ci

      - name: Build
        run: npm run build

      - name: Deploy to Server
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: ${{ secrets.SERVER_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            cd ~/htdocs/your-domain.com
            git pull origin main
            npm ci
            npm run build
            pm2 reload ecosystem.config.js --env production
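The workflow assumes the three deployment secrets already exist in the repository settings; one way to register them is with the GitHub CLI (the values shown are placeholders):

# Store the deployment credentials as repository secrets
gh secret set SERVER_HOST --body "203.0.113.10"
gh secret set SERVER_USER --body "your-site-user"
gh secret set SSH_PRIVATE_KEY < ~/.ssh/deploy_key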
- Advanced Workflow with Testing and Staging:
name: CI/CD Pipeline

on:
  push:
    branches: [ main, staging ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Setup Node.js
        uses: actions/setup-node@v2
        with:
          node-version: '18'

      - name: Install Dependencies
        run: npm ci

      - name: Run Tests
        run: npm test

      - name: Run Linting
        run: npm run lint

  deploy-staging:
    needs: test
    if: github.ref == 'refs/heads/staging'
    runs-on: ubuntu-latest

    steps:
      - name: Deploy to Staging
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.STAGING_HOST }}
          username: ${{ secrets.STAGING_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            cd ~/htdocs/staging.your-domain.com
            git pull origin staging
            npm ci
            npm run build
            pm2 reload ecosystem.config.js --env staging

  deploy-production:
    needs: [test]
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest

    steps:
      - name: Deploy to Production
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.PROD_HOST }}
          username: ${{ secrets.PROD_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            cd ~/htdocs/your-domain.com
            git pull origin main
            npm ci
            npm run build
            pm2 reload ecosystem.config.js --env production
- Automated Backup Before Deployment:
# Add this step before deployment
- name: Backup Database
  uses: appleboy/ssh-action@master
  with:
    host: ${{ secrets.SERVER_HOST }}
    username: ${{ secrets.SERVER_USER }}
    key: ${{ secrets.SSH_PRIVATE_KEY }}
    script: |
      clpctl db:backup --retentionPeriod=7
      clpctl remote-backup:create
Security Best Practices
- Environment Variables:
# On your CloudPanel server
nano ~/.env.production

# Add your environment variables
DATABASE_URL=postgresql://user:pass@localhost:5432/db
REDIS_URL=redis://localhost:6379
JWT_SECRET=your-secret-key
# PM2 doesn't read .env files by itself; load the file with dotenv
# (see the sketch below), then start with the production environment
pm2 start ecosystem.config.js --env production
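One common pattern for getting the file into the app's environment is to load it inside the ecosystem file. A sketch, assuming the dotenv package is installed; the app name is purely illustrative:

// ecosystem.config.js – pull ~/.env.production into the app environment
const os = require('os');
require('dotenv').config({ path: `${os.homedir()}/.env.production` });

module.exports = {
  apps: [{
    name: 'env-managed-app',
    script: 'server.js',
    env_production: {
      NODE_ENV: 'production',
      DATABASE_URL: process.env.DATABASE_URL,
      REDIS_URL: process.env.REDIS_URL,
      JWT_SECRET: process.env.JWT_SECRET
    }
  }]
};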
- SSL Certificate Automation:
# Add to GitHub Actions workflow
- name: Update SSL Certificate
  uses: appleboy/ssh-action@master
  with:
    host: ${{ secrets.SERVER_HOST }}
    username: ${{ secrets.SERVER_USER }}
    key: ${{ secrets.SSH_PRIVATE_KEY }}
    script: |
      clpctl lets-encrypt:renew:certificates
- Monitoring Setup:
// ecosystem.config.js with monitoring
module.exports = {
  apps: [{
    name: 'monitored-app',
    script: 'server.js',
    instances: 'max',
    exec_mode: 'cluster',
    max_memory_restart: '1G',
    merge_logs: true,
    log_date_format: 'YYYY-MM-DD HH:mm:ss Z',
    error_file: 'logs/err.log',
    out_file: 'logs/out.log',
    log_type: 'json',
    watch: ['server', 'client'],
    ignore_watch: ['node_modules', 'logs']
  }]
}
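After restarting with this config, PM2's own inspection commands are a quick sanity check that the log paths and watch settings took effect:

# Show runtime details (log paths, restarts, memory) for the app
pm2 show monitored-app

# Tail the logs written to the files defined above
pm2 logs monitored-app --lines 50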
Remember to always test your CI/CD pipeline in a staging environment first and maintain proper backup strategies before deploying to production. Regular monitoring and logging are crucial for maintaining a healthy production environment.