How to Run a THOR HD Encoder / Decoder Test (Step-by-Step)

This guide walks through a practical, repeatable procedure for testing THOR HD encoder/decoder performance: setup, test parameters, execution, and interpreting results. It assumes you have access to the THOR HD encoder and decoder units, a network switch, a source video feed (pattern generator or video file), and measurement tools (bitrate monitor, packet capture, and an objective video-quality metric such as VMAF or PSNR).

1) Test objectives and quick checklist

  • Objective: Measure throughput, latency, video quality, packet loss handling, and interoperability between encoder and decoder under controlled conditions.
  • Quick checklist: THOR HD encoder, THOR HD decoder, source (SDI/HDMI or IP stream), destination display or capture device, network switch, NTP-synced clock, Ethernet cables, test signal (color bars or real content), bitrate monitor, packet capture (Wireshark), video-quality tool (VMAF/PSNR), logging system.

2) Environment and baseline configuration

  1. Place encoder and decoder on the same local network and connect both to the switch. Use direct connections for initial tests to minimize network variables.
  2. Ensure both devices run the same firmware version; note firmware/build numbers in test logs.
  3. Sync clocks via NTP to support accurate latency timestamps.
  4. Configure source to 1080p59.94 (or your target THOR HD profile) with fixed frame rate and a known test pattern (SMPTE color bars) for objective measures.
  5. Set encoder to a known baseline profile: fixed bitrate (e.g., 20 Mbps), target latency mode (e.g., low-latency), default GOP structure (e.g., IPPP or IPB depending on THOR capabilities), and disable adaptive bitrate/ABR features for initial runs. Document all settings.
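The baseline settings above are worth capturing in machine-readable form so every run logs the exact configuration. A minimal sketch follows; the field names are illustrative placeholders, not THOR HD's actual configuration API, so adapt them to your encoder's interface.

```python
# Baseline encoder profile for the initial runs. Field names are
# illustrative -- map them to your encoder's actual configuration API.
BASELINE_PROFILE = {
    "resolution": "1920x1080",
    "frame_rate": 59.94,          # 1080p59.94, fixed frame rate
    "bitrate_kbps": 20000,        # 20 Mbps baseline
    "rate_control": "CBR",        # fixed bitrate for repeatability
    "latency_mode": "low",        # target low-latency mode
    "gop_structure": "IPPP",      # simple GOP; no B-frames for low latency
    "abr_enabled": False,         # disable adaptive bitrate for baseline
}

def format_profile(profile: dict) -> str:
    """Render the profile as key=value lines for the test log."""
    return "\n".join(f"{k}={v}" for k, v in sorted(profile.items()))
```

Dumping `format_profile(BASELINE_PROFILE)` into the test log header makes later runs directly comparable.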

3) Test cases (ordered from simple to advanced)

  1. Baseline pass-through — Encoder → Decoder on same switch, 20 Mbps, no packet loss. Validate video output and sync.
  2. Throughput sweep — Run fixed-bitrate encodes at 5, 10, 20, 30, and 50 Mbps. Measure decoder stability and output quality at each bitrate.
  3. Latency measurement — Inject a frame-accurate timecode overlay or LED pulse at source, capture timestamps at encoder input and decoder output, and compute one-way latency. Collect at least 30 samples and report the median and 95th percentile.
  4. Packet loss resilience — Use a traffic shaper or packet loss emulator to introduce 0.1%, 0.5%, 1%, 2% random packet loss and burst losses (e.g., 10–50 ms bursts). Observe decoder behavior (freeze, frame skip, error concealment).
  5. Network jitter and reordering — Introduce latency variation (±10–100 ms) and packet reordering; verify jitter buffer behavior and how it affects latency/quality.
  6. Long-duration stability — Run a 24-hour stream at a target bitrate to observe drift, memory leaks, or overheating. Log any reboots or errors.
  7. Interoperability — Repeat basic tests with other decoders/players that support THOR HD to ensure standards compliance.
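For the packet loss, jitter, and reordering cases (4 and 5), a Linux host or bridge in the network path can apply the impairments with `tc netem`. A sketch of building those commands follows; the interface name `eth0` and the impairment levels are examples, and the commands require root on the impairment host.

```python
# Build Linux `tc netem` commands for the impairment levels in test
# cases 4 and 5. Run the resulting commands (as root) on a host or
# bridge in the network path; "eth0" is an example interface name.

LOSS_LEVELS_PCT = [0.1, 0.5, 1.0, 2.0]   # random packet loss levels

def netem_loss_cmd(iface: str, loss_pct: float) -> str:
    """Command to apply random packet loss on an egress interface."""
    return f"tc qdisc replace dev {iface} root netem loss {loss_pct}%"

def netem_jitter_cmd(iface: str, delay_ms: int, jitter_ms: int) -> str:
    """Command to add delay with jitter; large jitter also reorders."""
    return (f"tc qdisc replace dev {iface} root netem "
            f"delay {delay_ms}ms {jitter_ms}ms")

def netem_clear_cmd(iface: str) -> str:
    """Remove all impairments after a test run."""
    return f"tc qdisc del dev {iface} root"
```

Run each impairment level for 3–5 minutes (per execution step 6), and always clear the qdisc between levels so impairments do not stack.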

4) Execution steps (detailed)

  1. Label all devices and record MAC addresses, IPs, and firmware.
  2. Start packet capture on the network near the decoder. Capture for the duration of each test case.
  3. Begin with baseline: feed color bars, start encoder, then decoder. Confirm clean output on display/capture. Record timestamps when stream starts.
  4. For throughput sweep: change encoder bitrate, allow 60 seconds stabilization, record decoder stats (buffer fullness, dropped frames), and compute VMAF/PSNR against the source reference for a 30-second clip.
  5. For latency: display a millisecond-precise timecode overlay or flash an LED visible to both capture devices; record input and output timestamps. Compute one-way latency; repeat and collect distribution.
  6. For packet loss/jitter: enable network impairment tool, run each impairment level for 3–5 minutes, and log decoder behavior and any error counters.
  7. For long run: schedule automated health polling (SNMP or REST if available) every 5 minutes and capture periodic frame grabs and CPU/temperature stats.
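The VMAF comparison in step 4 can be run with ffmpeg's `libvmaf` filter. A sketch of assembling that command follows; it assumes an ffmpeg build with `--enable-libvmaf` and frame-accurate, same-resolution captures, and the file names are placeholders.

```python
def vmaf_cmd(decoded: str, reference: str, log_path: str,
             duration_s: int = 30) -> list:
    """ffmpeg command comparing a decoded capture against the reference.

    Assumes an ffmpeg build with --enable-libvmaf; the distorted clip is
    the first input and the pristine reference the second.
    """
    return [
        "ffmpeg",
        "-t", str(duration_s), "-i", decoded,     # distorted capture
        "-t", str(duration_s), "-i", reference,   # reference source
        "-lavfi", f"libvmaf=log_fmt=json:log_path={log_path}",
        "-f", "null", "-",                        # discard video output
    ]
```

Pass the result to `subprocess.run`, then read the aggregate VMAF score out of the JSON log for the test report.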

5) Measurements and tools

  • Video quality: VMAF (preferred), PSNR as secondary. Compare decoded video to the reference source using a frame-accurate capture.
  • Latency: Timecode overlay or LED pulse method; log samples and compute median, mean, 95th percentile.
  • Network metrics: Packet captures analyzed with Wireshark for retransmissions, RTP/RTCP stats, and packet loss patterns.
  • Device metrics: CPU, memory, temperature, and encoder/decoder internal counters (bitrate, dropped frames).
  • Logs: Centralize system logs, packet captures, and measurement outputs with timestamps.
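Once latency samples are collected, the median, mean, and 95th percentile can be computed with a few lines of Python. This sketch uses the nearest-rank method for the percentile; swap in your preferred interpolation if your reporting standard requires it.

```python
import math
import statistics

def latency_summary(samples_ms):
    """Median, mean, and 95th-percentile (nearest-rank) one-way latency."""
    s = sorted(samples_ms)
    k = max(0, math.ceil(0.95 * len(s)) - 1)   # nearest-rank index for p95
    return {
        "median_ms": statistics.median(s),
        "mean_ms": statistics.fmean(s),
        "p95_ms": s[k],
    }
```

Feed in the full distribution from the timecode/LED runs (at least 30 samples) rather than a single measurement, since one-way latency typically varies with GOP position and buffering.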

6) Result reporting (concise)

  • Executive summary: One-paragraph outcome (pass/fail against objectives).
  • Key metrics: Baseline bitrate, median latency, 95th percentile latency, VMAF/PSNR per bitrate, packet loss threshold causing failures, uptime for long test.
  • Graphs/Artifacts: Include VMAF vs bitrate plot, latency distribution CDF, packet-loss vs quality/behavior chart, sample frame-grabs showing artifacts.
  • Recommendations: Tunings (bitrate, GOP, jitter buffer) and any firmware or configuration changes to improve results.
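The pass/fail verdict in the executive summary can be made mechanical by checking key metrics against acceptance thresholds. A sketch follows; the threshold values are examples only and should come from your own objectives.

```python
# Example acceptance thresholds -- replace with your own criteria.
THRESHOLDS = {
    "p95_latency_ms": 100,       # 95th-percentile one-way latency cap
    "min_vmaf": 90.0,            # minimum acceptable VMAF score
    "max_packet_loss_pct": 0.0,  # loss tolerated in the clean baseline
}

def evaluate(metrics: dict, thresholds: dict = THRESHOLDS) -> dict:
    """Per-criterion pass/fail plus an overall verdict."""
    results = {
        "latency": metrics["p95_latency_ms"] <= thresholds["p95_latency_ms"],
        "quality": metrics["vmaf"] >= thresholds["min_vmaf"],
        "loss": metrics["packet_loss_pct"] <= thresholds["max_packet_loss_pct"],
    }
    results["overall_pass"] = all(results.values())
    return results
```

Running every test case through the same `evaluate` call keeps the report's pass/fail column consistent across bitrates and impairment levels.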

7) Troubleshooting checklist

  • No video at decoder: verify the network path, multicast group or unicast IP, and ACLs, and confirm the encoder stream is active.
  • Audio/video sync drift: confirm timestamping and PTS/DTS handling; check NTP sync.
  • Frequent rebuffering: increase jitter buffer or reduce encoder bitrate; check for burst packet loss.
  • Quality artifacts at high motion: increase bitrate or adjust encoder presets/GOP.

8) Example test log snippet (what to record)

  • Date/time: 2026-02-05 09:30 UTC
  • Devices: Encoder SN X, Decoder SN Y, Firmware v1.2.3
  • Test: Baseline 20 Mbps, 1080p59.94, color bars, duration 5 min
  • Metrics: Median latency 48 ms, 95th 72 ms, VMAF 92.1, packet loss 0%
  • Notes: Clean output; no errors in logs.
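Entries like the one above are easiest to aggregate if each run is written as one structured record. A minimal sketch of a JSON-lines log entry follows; the field names are illustrative, not a required schema.

```python
import json
from datetime import datetime, timezone

def make_log_entry(test_name: str, devices: dict, metrics: dict,
                   notes: str = "") -> str:
    """One test-log record as a JSON line; field names are illustrative."""
    return json.dumps({
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "test": test_name,
        "devices": devices,
        "metrics": metrics,
        "notes": notes,
    })
```

Appending each record to a `.jsonl` file gives you a timestamped history that scripts can later turn into the VMAF-vs-bitrate and latency-CDF plots from section 6.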

9) Final notes

  • Repeatability: keep test vectors and scripts under version control and automate runs where possible.
  • When validating field performance, repeat tests across real network topologies and with real content to complement lab results.

