RHEL.ai Accelerators Intern
Kenny Ge
Kenneth Ge is an intern in the RHEL.ai Group (formerly known as the Kernel Accelerators Group). He has experience in a wide range of areas, including AI, GPU algorithms and optimization, desktop application development, and web development. He loves bringing a multidisciplinary toolbox to any problem and is dedicated to making an impact.
Kenny Ge's contributions
GPU benchmarking and how to choose a GPU framework
Kenny Ge
This short guide explains how to choose a GPU framework and library (e.g., CUDA vs. OpenCL), as well as how to design accurate benchmarks.
Your second GPU algorithm: Quicksort
Kenny Ge
Learn how to write a GPU-accelerated quicksort procedure using the prefix sum/scan algorithm, and explore other GPU algorithms, such as Reduce and Game of Life.
Your first GPU algorithm: Scan/prefix sum
Kenny Ge
An in-depth look at a foundational GPU programming algorithm: the prefix sum. The goal is to expose the reader to the tools and language of GPU programming, rather than to present it only as a way to optimize certain existing subroutines.
What is GPU programming?
Kenny Ge
The first of a four-part series on introductory GPU programming, this article provides a basic overview of the GPU programming model.