A practical, field-tested template for establishing a repeatable performance baseline for Adobe ColdFusion and CFML applications. The ColdFusion Performance Baseline Workbook is an XLSX spreadsheet that helps you plan, run, measure, and compare load tests so you can tune servers, quantify improvements, and avoid regressions. It’s valuable for developers, SREs, and architects who want a consistent way to benchmark ColdFusion across environments, versions, or configuration changes.
Overview
The workbook provides a structured workflow to capture your environment’s configuration, define test plans, import metrics from tools like JMeter or k6, and visualize results using prebuilt charts and summaries. By standardizing how you measure throughput, latency, error rate, and resource usage, you create a trusted baseline that supports capacity planning, upgrade assessments (e.g., ColdFusion 2018 → 2023), and SLA tracking.
- Works with Adobe ColdFusion (2016–2023) and Lucee 5.x/6.x on Tomcat.
- Compatible with Java 11/17, Linux and Windows Server.
- Use with any HTTP load testing tool (e.g., JMeter, k6, Gatling) and APM (e.g., FusionReactor, Java Flight Recorder, Mission Control).
- Lightweight XLSX format opens in Microsoft Excel, LibreOffice, or Google Sheets.
What You’ll Get
The download is a single XLSX file containing:
- ReadMe & Workflow: One-page guide on how to proceed through the workbook.
- Environment Inventory: Document OS, ColdFusion version/patch, Tomcat, JVM, hardware, and deployment topology.
- CF Admin Settings: Record request tuning, caching, datasource pool sizes, and other key ColdFusion Administrator settings.
- JVM & Tomcat: Track heap size, GC mode, thread pools, and connector settings (AJP/HTTP).
- Database & Datasources: Note JDBC options, prepared statement caching, pool sizes, and DB version.
- Test Plan Matrix: Define scenarios, target endpoints, traffic mix, data sets, and pass/fail NFRs.
- Load Run Log: Log each run with date, commit SHA/tag, changes applied, and notes.
- Metrics Import: Paste CSV exports (from JMeter/k6/APM). Predefined headers for timestamps, RPS, latency percentiles, errors, CPU, heap, GC, DB timings.
- Results Dashboard: Auto-updating charts for throughput, p95/p99 latency, error rate, and resource utilization.
- Percentile Summary: Aggregated stats (avg, p90, p95, p99) and comparison against SLA thresholds.
- Error Analysis: Breakdowns by response code, sampler/endpoint, and failure message.
- Bottleneck Journal: Root-cause notes, hypotheses, and verification steps.
- Action Plan & RCA: A template for change proposals, expected impact, rollback criteria, and verification results.
- Change Log: Versioned timeline of configurations and code variations tested.
- Glossary: Definitions for terms like “baseline,” “steady state,” “coordinated omission,” “think time,” etc.
Also included inside the workbook:
- CSV header templates you can copy into your load tools.
- Embedded links to example JMeter/JFR export steps and k6 script snippets.
- Prebuilt Excel formulas for deltas, improvement percentages, and pass/fail highlighting.
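The delta and improvement-percentage formulas boil down to simple arithmetic, so they are easy to replicate outside Excel (for example, in a CI gate). A minimal sketch, assuming you distinguish latency-style metrics (lower is better) from throughput-style metrics (higher is better); the function name and numbers are illustrative, not the workbook's exact formulas:

```python
def improvement_pct(baseline: float, current: float, lower_is_better: bool = True) -> float:
    """Percentage improvement of `current` over `baseline`.

    For latency-style metrics (lower is better) a drop is a positive improvement;
    for throughput-style metrics (higher is better) a rise is positive.
    """
    if baseline == 0:
        raise ValueError("baseline must be non-zero")
    delta = baseline - current if lower_is_better else current - baseline
    return 100.0 * delta / baseline

# p95 latency fell from 500 ms to 400 ms -> 20% improvement.
print(improvement_pct(500, 400))                          # 20.0
# Throughput rose from 400 to 500 RPS -> 25% improvement.
print(improvement_pct(400, 500, lower_is_better=False))   # 25.0
```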
Download and Requirements
- File type: XLSX (no macros).
- Size: Lightweight; suitable for email or version control.
- Compatibility: Excel 2016+, Excel for Mac, LibreOffice, Google Sheets (import as Google Sheet if preferred).
How to get it:
- Use the “Download” button or link associated with this page to obtain the XLSX.
- If you don’t see the link, request access from your administrator or contact support to receive the file.
How to Use the Workbook
Step 1: Make a Working Copy
- Open the XLSX and save a project-specific copy (e.g., cf-baseline-clientA-2025Q1.xlsx).
- Set your timezone, units (ms, MB), and SLA thresholds in the ReadMe & Workflow and Percentile Summary sheets.
Step 2: Capture Your Environment
- Fill out Environment Inventory, CF Admin Settings, JVM & Tomcat, and Database & Datasources.
- Highlight any known constraints (e.g., shared DB instance, limited IOPS, container CPU limits).
Step 3: Define the Baseline Test Plan
- In Test Plan Matrix, select 3–7 representative endpoints (mix of read/write).
- Define traffic mix, payload sizes, think time, and ramp patterns.
- Specify pass/fail NFRs like “p95 <= 400 ms,” “Error rate < 1%,” “Throughput >= 500 RPS.”
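Pass/fail NFRs like these can also be evaluated mechanically against a run's summary stats. A sketch using the example thresholds above; the metric keys and operator table are assumptions, not the workbook's schema:

```python
# Hypothetical NFR thresholds mirroring the examples above.
NFRS = {
    "p95_ms":     ("<=", 400),
    "error_rate": ("<",  0.01),
    "rps":        (">=", 500),
}

OPS = {
    "<=": lambda v, t: v <= t,
    "<":  lambda v, t: v < t,
    ">=": lambda v, t: v >= t,
}

def evaluate(run: dict) -> dict:
    """Return pass/fail per NFR for one run's summary metrics."""
    return {name: OPS[op](run[name], threshold)
            for name, (op, threshold) in NFRS.items()}

print(evaluate({"p95_ms": 380, "error_rate": 0.004, "rps": 512}))
# {'p95_ms': True, 'error_rate': True, 'rps': True}
```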
Step 4: Prepare Monitoring and Observability
- Enable application monitoring: FusionReactor or the ColdFusion Server Monitor, plus system telemetry (CPU, memory, network, disk).
- Collect JVM signals: heap usage, GC pauses, thread counts (JFR/Mission Control or VisualVM).
- Validate that timestamps across tools are synchronized (NTP/PTP).
Step 5: Warm Up and Establish Steady State
- Run a warm-up test (5–15 minutes) to allow JIT compilation, caches, and pools to stabilize.
- Verify no unexpected errors in logs.
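One way to confirm steady state is to check that throughput variance over the last few sample windows is small. A hedged sketch: the window size and the 5% coefficient-of-variation threshold are assumptions to tune, not workbook rules:

```python
from statistics import mean, stdev

def is_steady(samples: list[float], window: int = 5, max_cv: float = 0.05) -> bool:
    """Treat the run as steady when the coefficient of variation
    (stdev / mean) of the last `window` throughput samples is below `max_cv`."""
    if len(samples) < window:
        return False
    tail = samples[-window:]
    m = mean(tail)
    return m > 0 and stdev(tail) / m < max_cv

print(is_steady([120, 480, 495, 501, 498, 503, 499]))  # True: tail has flattened
print(is_steady([100, 200, 300, 400, 500, 600, 700]))  # False: still ramping
```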
Step 6: Execute Load Tests
- Run stepped concurrency tests (e.g., 10, 25, 50, 100 VUs) or target RPS if your tool supports it.
- Example JMeter CLI (adjust for your setup):
- jmeter -n -t plan.jmx -Jusers=50 -Jduration=900 -l results_50.csv
- Always keep data sets realistic to avoid unrealistically hot caches.
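The stepped runs can be scripted so each concurrency level produces its own results file. A sketch that mirrors the JMeter CLI example above (the `plan.jmx` path and the `users`/`duration` property names follow that example and may differ in your setup):

```python
import subprocess  # used when you uncomment the run line below

def jmeter_cmd(users: int, duration_s: int = 900) -> list[str]:
    """Build the non-GUI JMeter invocation from the CLI example above."""
    return [
        "jmeter", "-n", "-t", "plan.jmx",
        f"-Jusers={users}", f"-Jduration={duration_s}",
        "-l", f"results_{users}.csv",
    ]

for users in (10, 25, 50, 100):
    cmd = jmeter_cmd(users)
    print(" ".join(cmd))
    # Uncomment to actually execute each step:
    # subprocess.run(cmd, check=True)
```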
Step 7: Export and Import Metrics
- From your load tool, export CSV with time, label, latency (avg, p90, p95, p99), throughput, errors.
- From APM/JFR, export CPU, heap used, GC pause, threads, DB time if available.
- Paste into the Metrics Import sheet following the provided headers and mapping notes.
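If you prefer to pre-aggregate before pasting, a raw JMeter results CSV (default columns include `timeStamp`, `label`, `elapsed`, `success`) can be reduced to per-label stats first. A sketch using nearest-rank percentiles; the output keys are illustrative, not the Metrics Import sheet's exact headers:

```python
import csv
from collections import defaultdict

def percentile(sorted_vals: list[int], p: float) -> int:
    """Nearest-rank percentile on an already-sorted list."""
    k = max(0, min(len(sorted_vals) - 1, round(p / 100 * len(sorted_vals)) - 1))
    return sorted_vals[k]

def summarize(path: str) -> dict:
    """Per-label latency percentiles and error rates from a JMeter CSV."""
    latencies, errors = defaultdict(list), defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            latencies[row["label"]].append(int(row["elapsed"]))
            if row["success"].lower() != "true":
                errors[row["label"]] += 1
    out = {}
    for label, vals in latencies.items():
        vals.sort()
        out[label] = {
            "count": len(vals),
            "avg_ms": sum(vals) / len(vals),
            "p90_ms": percentile(vals, 90),
            "p95_ms": percentile(vals, 95),
            "p99_ms": percentile(vals, 99),
            "error_rate": errors[label] / len(vals),
        }
    return out
```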
Step 8: Review Charts and Summaries
- Open Results Dashboard and Percentile Summary to inspect p95/p99, throughput, and error rates.
- Compare against SLA and annotate insights in Load Run Log and Bottleneck Journal.
Step 9: Document Actions and Verify Improvements
- Use Action Plan & RCA to propose changes: e.g., increase Tomcat maxThreads, tune JDBC pool, add indexes.
- Re-run tests and record before/after metrics. The workbook highlights improvement deltas automatically.
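The before/after comparison the workbook highlights can be expressed as a simple classifier: flag each metric as improved, regressed, or flat depending on how far it moves relative to the baseline. A sketch assuming lower-is-better metrics and a 5% noise tolerance (both assumptions, not workbook settings):

```python
def compare(baseline: dict, candidate: dict, tolerance: float = 0.05) -> dict:
    """Classify each lower-is-better metric as 'improved', 'regressed',
    or 'flat' when it moves more than `tolerance` relative to baseline."""
    verdicts = {}
    for metric, base in baseline.items():
        change = (candidate[metric] - base) / base
        if change <= -tolerance:
            verdicts[metric] = "improved"
        elif change >= tolerance:
            verdicts[metric] = "regressed"
        else:
            verdicts[metric] = "flat"
    return verdicts

print(compare({"p95_ms": 500, "p99_ms": 900},
              {"p95_ms": 400, "p99_ms": 940}))
# {'p95_ms': 'improved', 'p99_ms': 'flat'}
```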
Best Practices
- Establish a single, immutable baseline per environment, then vary one factor at a time.
- Use realistic data and payloads; avoid synthetic minimal payloads that hide serialization and DB costs.
- Separate read/write tests; write-heavy scenarios stress DB, locks, and transaction settings.
- For ColdFusion:
- Right-size “Maximum number of simultaneous requests,” “Queue timeout,” and template cache settings.
- Verify JDBC settings: prepared statements, fetch size, connection pool size aligned to DB capacity.
- Profile cfthread usage and avoid unbounded thread creation.
- Measure impact of query caching and page caching; validate eviction policies.
- JVM:
- Start with G1GC for Java 11/17; size heap to avoid excessive GC or OOM risk.
- Monitor GC pause percentiles; p99 spikes often reveal heap pressure or allocation bursts.
- Tomcat connectors:
- Align maxThreads, acceptCount, and keepAlive settings with expected concurrency and upstream load balancer behavior.
- Data store:
- Index critical queries; track slow queries and lock waits.
- Validate connection pool saturation and timeouts.
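As a starting point for connection pool sizing, a widely cited rule of thumb (from HikariCP's pool-sizing guidance, offered here as an assumption to validate against your own database, not a hard rule) is connections ≈ DB cores × 2 + effective spindle count:

```python
def starting_pool_size(db_cores: int, effective_spindles: int = 1) -> int:
    """Rule-of-thumb starting point for a JDBC pool; tune under load."""
    return db_cores * 2 + effective_spindles

print(starting_pool_size(8))  # 17
```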
Supported Environments
- Application servers: Adobe ColdFusion 2016–2023; Lucee 5.x/6.x.
- Java: 11 and 17 commonly used; align with your CF distribution.
- OS: Windows Server, RHEL/CentOS/Rocky, Ubuntu.
- Load tools: JMeter, k6, Gatling; any tool that exports CSV with timestamps.
Benefits and Use Cases
- Saves time with a ready-to-use baseline methodology and reporting dashboard.
- Improves performance by revealing bottlenecks across CFML code, JDBC, Tomcat, and JVM.
- Reduces risk during upgrades and cloud migrations by enabling apples-to-apples comparisons.
- Facilitates capacity planning: determine safe concurrency, right-size instances, and plan auto-scaling.
- Aids governance: maintain an audit trail of changes, test runs, and outcomes.
- Ideal for:
- Version upgrades (CF 2018 → 2023), Java upgrades (8 → 11/17).
- Replatforming (VMs to containers, on-prem to cloud).
- Major feature releases and pre-peak readiness testing.
Troubleshooting and Tips
- If charts don’t update, confirm your pasted CSV matches the column headers and date format.
- Large variance across runs typically indicates non-isolated environments or background jobs; re-test under controlled conditions.
- High error rates with stable throughput often signal downstream saturation (DB pool, external API limits).
- Latency inflation without CPU saturation may point to thread contention, GC pauses, or I/O waits.
- Keep the workbook under version control and tag runs with commit SHAs to tie results to code changes.
Key Takeaways
- A consistent, repeatable baseline is the foundation of ColdFusion performance tuning.
- The XLSX workbook gives you structure: inventory, test design, data import, charts, and action tracking.
- Use percentile-driven metrics (p95/p99), not averages, to catch real user pain.
- Change one variable at a time and document everything to prove impact and avoid regressions.
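The "percentiles, not averages" point is easy to demonstrate: a handful of multi-second stalls barely moves the mean but dominates p99. The numbers below are synthetic:

```python
from statistics import mean

# 985 fast responses plus 15 multi-second stalls (1.5% of traffic).
latencies_ms = sorted([50] * 985 + [5000] * 15)

avg = mean(latencies_ms)                               # 124.25 ms: looks fine
p99 = latencies_ms[int(0.99 * len(latencies_ms)) - 1]  # 5000 ms: real user pain

print(f"avg={avg} ms, p99={p99} ms")  # avg=124.25 ms, p99=5000 ms
```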
FAQ
How do I download the workbook?
Use the “Download” button or link provided with this article/page. If it’s missing, contact your site admin or support to receive the XLSX via a secure link or email.
Can I use Google Sheets instead of Excel?
Yes. Import the XLSX into Google Drive and open as a Google Sheet. Some chart styling may differ slightly, but all formulas and workflows are designed to work in Sheets as well.
Which load testing tools are supported?
Any tool that can export CSV with timestamps and latency metrics works. The workbook includes mappings for JMeter and k6; Gatling exports work as well with minor column alignment.
Does it require FusionReactor or paid APM?
No. You can rely on load tool metrics plus OS/JVM telemetry (e.g., JFR, Mission Control, VisualVM). FusionReactor or similar APMs provide richer insights but are optional.
Is this only for Adobe ColdFusion?
The methodology is CFML-centric but applies to Lucee and other JVM web stacks. The sheets include ColdFusion-specific fields for convenience, yet the baseline process is platform-agnostic.
