
FlameGraph, Htop — Benchmarking CPU — Linux

Rakesh M


I have written a small post on what happens at the process level; now let's throw some flame into it with flame graphs.

I am a fan of Brendan Gregg's work; his writings and the flame graph tool are his contributions to the open-source community — https://www.brendangregg.com/flamegraphs.html
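As a quick preview of what this post works toward, here is a minimal sketch of the usual perf + FlameGraph workflow, driven from Python. It assumes Linux perf is installed, that Brendan Gregg's FlameGraph repository has been cloned to ~/FlameGraph (a hypothetical path used only for illustration), and that you have permission to do system-wide profiling (this typically needs root or a relaxed perf_event_paranoid setting).

```python
# flamegraph_sketch.py -- minimal sketch of the perf + FlameGraph workflow.
# Assumes: linux perf is installed, and Brendan Gregg's FlameGraph repo
# (https://github.com/brendangregg/FlameGraph) is cloned at FLAMEGRAPH_DIR
# with its Perl scripts executable.
import subprocess
from pathlib import Path

FLAMEGRAPH_DIR = Path.home() / "FlameGraph"   # hypothetical clone location

def capture_flamegraph(seconds: int = 30, out_svg: str = "cpu.svg") -> None:
    # 1. Sample stacks system-wide at 99 Hz for `seconds` seconds (writes perf.data).
    subprocess.run(
        ["perf", "record", "-F", "99", "-a", "-g", "--", "sleep", str(seconds)],
        check=True,
    )
    # 2. Dump the samples, fold the stacks, and render the SVG flame graph.
    perf_script = subprocess.run(
        ["perf", "script"], check=True, capture_output=True
    )
    folded = subprocess.run(
        [str(FLAMEGRAPH_DIR / "stackcollapse-perf.pl")],
        input=perf_script.stdout, check=True, capture_output=True,
    )
    svg = subprocess.run(
        [str(FLAMEGRAPH_DIR / "flamegraph.pl")],
        input=folded.stdout, check=True, capture_output=True,
    )
    Path(out_svg).write_bytes(svg.stdout)

if __name__ == "__main__":
    capture_flamegraph()   # open cpu.svg in a browser to explore the stacks
```

The same three steps can of course be run directly on the command line; wrapping them in a script just makes repeated captures easier to compare.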

Before moving deeper into flame graphs, let's understand some benchmarking concepts.

Benchmarking, in general, is a methodology for testing resource limits and regressions in a controlled environment. There are two broad types of benchmarking:

  • Micro-benchmarking — uses small, artificial workloads (a minimal sketch follows this list)
  • Macro-benchmarking — simulates client workloads, in part or in full
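To make the first bullet concrete, here is a minimal micro-benchmark sketch using Python's standard timeit module. The workload (summing a few squares) is deliberately tiny and synthetic, which is exactly what makes it a micro-benchmark rather than a macro-benchmark.

```python
# micro_bench.py -- a tiny, artificial workload timed in isolation.
import timeit

def workload() -> int:
    # Deliberately small and synthetic: sum the squares of 0..999.
    return sum(i * i for i in range(1_000))

if __name__ == "__main__":
    # Repeat the measurement several times and report the best run,
    # which filters out scheduling noise from the rest of the system.
    runs = timeit.repeat(workload, number=10_000, repeat=5)
    print(f"best of 5: {min(runs) / 10_000 * 1e6:.2f} µs per call")
```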

Most benchmarking results ultimately boil down to the price/performance ratio. A benchmarking effort can start as proof-of-concept testing, grow into application or system load testing to identify bottlenecks for troubleshooting or tuning, or simply aim to find the maximum stress the system is capable of taking.
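As an illustration of that last case, here is a rough sketch that keeps every CPU core busy for a short burst so you can watch the machine under full load in htop (or capture a flame graph while it runs). This is only a toy stressor for illustration, not a substitute for purpose-built tools such as stress-ng.

```python
# cpu_stress_sketch.py -- saturate every core with busy work for a short burst.
# Toy stressor for watching the system under load in htop; not a benchmark tool.
import multiprocessing as mp
import time

def burn(seconds: float) -> None:
    # Spin on arithmetic until the deadline; keeps one core near 100%.
    deadline = time.monotonic() + seconds
    x = 0
    while time.monotonic() < deadline:
        x = (x * 31 + 7) % 1_000_003

if __name__ == "__main__":
    duration = 30.0                      # seconds of stress; adjust as needed
    workers = mp.cpu_count()             # one busy-looping process per core
    procs = [mp.Process(target=burn, args=(duration,)) for _ in range(workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(f"kept {workers} cores busy for {duration:.0f}s")
```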

Enterprise / on-premises benchmarking: take a simple scenario of building out a data centre with huge racks of networking and computing equipment. Since data-centre builds are mostly identical and mirrored, benchmarking before raising the purchase order is critical.

Cloud-based benchmarking: this is a relatively inexpensive setup. While a vendor like AWS has many compute instance types, it's easier to…
