> Hello

And welcome to my portfolio.

Regards,
Eemeli Vaskelainen

Selected work - Personal

Tailcaster | 2024

Bluetooth music player: a full prototype product including a GUI developed from scratch and AI-based human-technology interaction

Soundwaves | 2021

Sound visualizer software featuring a floating node-based UI, high-performance custom rendering, and video encoding.

Selected work - Professional

Data Collection Solutions | 2023 - 2025

I developed various AI training data collection solutions for our client, with Nvidia Jetson as the target hardware. Data was collected from sensors such as cameras and LiDARs.

In this project I worked with technologies including: Python, C++, CUDA, DDS, CAN, RTSP, MQTT, MCAP, ROS2, Docker, Linux.
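
To give a flavor of what such a recording pipeline boils down to, here is a minimal sketch of writing sensor readings into an MCAP file with the mcap Python library. The topic name, schema, and fake_sensor_reading helper are illustrative assumptions, not the client's actual setup.

import json
import time

from mcap.writer import Writer


def fake_sensor_reading() -> dict:
    # Hypothetical stand-in for a real camera/LiDAR driver read.
    return {"stamp": time.time(), "value": 42}


with open("recording.mcap", "wb") as f:
    writer = Writer(f)
    writer.start()

    # Register a schema and a channel (topic) for the messages.
    schema_id = writer.register_schema(
        name="SensorReading",
        encoding="jsonschema",
        data=json.dumps({"type": "object"}).encode(),
    )
    channel_id = writer.register_channel(
        topic="/sensors/example",
        message_encoding="json",
        schema_id=schema_id,
    )

    # Log a handful of timestamped readings.
    for _ in range(10):
        now_ns = time.time_ns()
        writer.add_message(
            channel_id=channel_id,
            log_time=now_ns,
            publish_time=now_ns,
            data=json.dumps(fake_sensor_reading()).encode(),
        )

    writer.finish()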

Industrial Edge AI Migration | 2024

During the proof-of-concept phase of the project, I contributed by identifying and implementing the changes needed in an existing solution developed for another target. In the productization phase, I helped our client migrate the developed software to run on Nvidia Jetson.

In this project I worked with technologies including: Python, PyTorch, CUDA, Docker, AMQP, REST, Linux.
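
The core of such a migration is keeping the code device-agnostic, so the same software runs on a CUDA-capable Jetson and on a CPU-only development machine. Below is a minimal PyTorch sketch of that pattern; the model is a placeholder, not the client's actual network.

import torch

# Use the Jetson's GPU when available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder model standing in for the real network.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 4),
).to(device)
model.eval()

with torch.no_grad():
    batch = torch.randn(8, 16, device=device)
    out = model(batch)

print(out.shape, out.device)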

Reinforcement Learning Template | 2024

An internal project where I developed a containerized reinforcement learning template to be used as a foundation for simulation-based AI training. The template showed how to train a neural network using double Q-learning to solve an autonomous decision-making task, and it was later successfully applied in a more complex setting.

In this project I worked with technologies including: PyTorch, CUDA, Python, Docker.
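
The essence of double Q-learning is decoupling action selection from action evaluation: the online network picks the next action while a target network scores it, which reduces the overestimation bias of plain Q-learning. Below is a minimal PyTorch sketch of that target computation; the network sizes, batch, and gamma value are illustrative assumptions, not the template's actual configuration.

import torch

obs_dim, n_actions, gamma = 4, 2, 0.99

# The online network is trained; the target network is a periodically synced copy.
online_net = torch.nn.Linear(obs_dim, n_actions)
target_net = torch.nn.Linear(obs_dim, n_actions)
target_net.load_state_dict(online_net.state_dict())

# One fake transition batch: (state, action, reward, next state, done flag).
s = torch.randn(32, obs_dim)
a = torch.randint(0, n_actions, (32,))
r = torch.randn(32)
s_next = torch.randn(32, obs_dim)
done = torch.zeros(32)

with torch.no_grad():
    # Select the next action with the online network...
    next_a = online_net(s_next).argmax(dim=1)
    # ...but evaluate it with the target network.
    next_q = target_net(s_next).gather(1, next_a.unsqueeze(1)).squeeze(1)
    target = r + gamma * (1.0 - done) * next_q

q = online_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
loss = torch.nn.functional.smooth_l1_loss(q, target)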

Containerized Software For Robotics | 2022 - 2024

An internal project focused on increasing expertise in robotics and autonomous vehicles. During the project, I developed various containerized software solutions and proofs of concept. Link to related blog post.

In this project I worked with technologies including: ROS2, Docker, Python, C++, O3DE, AWS, Jenkins, Linux.
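
As an example of the kind of building block these proofs of concept were composed of, here is a minimal rclpy publisher node; the node and topic names are illustrative assumptions.

import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class HeartbeatNode(Node):
    def __init__(self):
        super().__init__("heartbeat")
        self.pub = self.create_publisher(String, "/heartbeat", 10)
        # Publish once a second.
        self.timer = self.create_timer(1.0, self.tick)

    def tick(self):
        msg = String()
        msg.data = "alive"
        self.pub.publish(msg)


def main():
    rclpy.init()
    node = HeartbeatNode()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == "__main__":
    main()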

Tailcaster

The Tailcaster project started from the thought of developing something I could use often, which turned out to be a Bluetooth music player. Besides this, I wanted to make the player look futuristic and chose to add a display with a half-transparent mirror on top of it. Furthermore, I wanted to add something technologically cool and developed "air touch" for interacting with the player.

The name Tailcaster comes from the tail of wires always connected to the whole.

In this project I worked with technologies including: C++, Python, GLSL, OpenGL, X11, MediaPipe, ZMQ, Protobuf, Bluetooth, Yocto, PyTorch, Linux.

Below you can find a breakdown of the project.

Hardware

After trying different options, a Raspberry Pi 5 combined with a Hifiberry AMP4 Pro and a Camera Module 2 turned out to be a perfect, compact combination with enough performance and quality.

Bluetooth Audio

Utilizing the Pi's built-in Bluetooth with bluealsa, the Pi was turned into a Bluetooth speaker. Furthermore, using BlueZ and a custom-developed "bluetooth host", Bluetooth events could be listened to and sent via D-Bus, turning the speaker into a "player".
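
To illustrate the idea, here is a minimal sketch of listening for BlueZ MediaPlayer1 property changes over D-Bus with pydbus. It assumes BlueZ is running and a device is paired; the actual "bluetooth host" is more involved than this.

from gi.repository import GLib
from pydbus import SystemBus


def on_properties_changed(sender, obj_path, iface, signal, params):
    interface, changed, _invalidated = params
    # Track and playback status updates arrive here (e.g. "Track", "Status").
    print(obj_path, changed)


bus = SystemBus()
bus.subscribe(
    iface="org.freedesktop.DBus.Properties",
    signal="PropertiesChanged",
    arg0="org.bluez.MediaPlayer1",  # filter to media player events only
    signal_fired=on_properties_changed,
)

GLib.MainLoop().run()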

GUI

A GUI with basic Bluetooth player features and functionality was developed from scratch. Smooth animations and text rendering turned out to be the most challenging parts of the implementation.
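
The GUI code itself is too large to show here, but as a sketch of what "smooth animation" boils down to, below is an eased, time-based interpolation of a single property. This is illustrative only, not the player's actual GUI code.

import time


def ease_in_out_cubic(t: float) -> float:
    # Map linear progress t in [0, 1] to eased progress in [0, 1].
    if t < 0.5:
        return 4.0 * t * t * t
    return 1.0 - ((-2.0 * t + 2.0) ** 3) / 2.0


def animate(start: float, end: float, duration: float) -> None:
    t0 = time.monotonic()
    while True:
        t = min((time.monotonic() - t0) / duration, 1.0)
        value = start + (end - start) * ease_in_out_cubic(t)
        print(f"value = {value:6.2f}")
        if t >= 1.0:
            break
        time.sleep(1 / 60)  # ~60 fps tick


animate(0.0, 100.0, duration=1.0)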

Air Touch

I applied a pretrained hand gesture recognition model to make "air touch" based interaction possible. This worked out surprisingly well considering the limited computational capabilities of the Pi.
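
Below is a minimal sketch of the kind of hand tracking "air touch" builds on, using MediaPipe's pretrained hand model with OpenCV. Mapping the fingertip position to UI actions is an illustrative assumption, not the player's actual gesture logic.

import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.6)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        landmarks = results.multi_hand_landmarks[0].landmark
        tip = landmarks[mp.solutions.hands.HandLandmark.INDEX_FINGER_TIP]
        # Normalized [0, 1] coordinates; map these to on-screen controls.
        print(f"index fingertip at ({tip.x:.2f}, {tip.y:.2f})")
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()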

Covers

After a lot of trial and error, the 3D-printable model was ready after around 30 printing sessions. PLA turned out to be a fitting material choice for each part once better airflow was implemented in the design.

Soundwaves

The Soundwaves project was made as a showcase when applying for developer jobs back in 2021. The main learnings from the project were how to implement full-scale software, good software architecture practices, how to implement custom rendering, and why smooth UX is important.

In this project I worked with technologies including: C++, GLSL, OpenGL, ImGUI, AVC.

Visualization outcome

Floating node-based UI

Following this section is the "heart" of the project: the fragment shader, which is still the most complex piece of code I have written.

Based on the current playback time, the shader reads the audio spectrum mapped into GPU memory and computes extremely smooth lines that create the visualization shown in the video.

Performance with this shader was very good: running the visualization in real time at 60 frames per second was entirely doable on a mediocre GPU. For 256 lanes (as in the video), this meant running the following fragment shader up to 31,850,496,000 times every second (256 lanes × 1920 × 1080 pixels × 60 frames per second).

I can recommend The Book of Shaders for learning the fascinating world of shaders.

#version 460 core

layout (location = 0) in vec3 position;
layout (location = 1) flat in int id; // 'flat' required
layout (location = 0) out vec4 FragColor;

layout (std430, binding = 2) buffer canvas_stack_fs
{
    int u_ring_count;
    float u_song_time;
    float u_flow_scale;
    float u_thickness;
    vec4 padding;
    vec4 u_hline_color;
};

const float PI = 3.141592653;
const float thickness = 0.010;
const int lane_count = 256;
const int block_count = 16;

uniform samplerBuffer u_tbo_tex[block_count];

float random(float x)
{
    return fract(sin(x + 1000.0) * 100000.0);
}

void main()
{    
    // Pixel coordinates
    vec2 pixel_position = position.xy;
    // Current pixel
    vec2 st = pixel_position;

    // Offset to the middle
    st.x -= 0.5;

    // Flow speed affects how much can be seen
    st.x *= 1.0 / (u_flow_scale); // Multiply by negative for inverse flow

    // Data length * point density = total time in seconds
    float total_time = u_ring_count * 0.016;

    // Distance represents song time
    // Always from "start 0.0 to end 1.0"
    float distance = u_song_time / total_time;

    // Current pixel where distance is applied
    vec2 sts = st;
    sts.x += distance;

    // Apply sliding effect to the current pixel
    st.x += distance;
    st = fract(st);

    // Current ring calculated from sts
    int ring_n = int(floor(sts.x * float(u_ring_count)));

    // Lane index from the total lane count
    int lane_n = id;
    
    int block_lane_count = lane_count / block_count;

    int block_n = int(floor(float(lane_n) / float(block_lane_count)));
    // Lane index within the block lane count
    int block_lane_n = lane_n - (block_lane_count * block_n);

    // Calculate location
    int n = block_lane_n * u_ring_count + ring_n;

    // SRGB
    n *= 4;

    // Fetch audio data
    vec2 audio_texel = vec2(texelFetch(u_tbo_tex[block_n], n).r, texelFetch(u_tbo_tex[block_n], n + 4).r);
    float height_a = audio_texel.x;
    float height_b = audio_texel.y;

    // Noise calculations
    int nr = int(floor(st.x));
    float l = fract(st.x);

    float noise = mix(random(float(nr + lane_n * 10)), random(float(nr + 1 + lane_n * 10)), smoothstep(0.0, 1.0, l));

    // Apply noise to height
    height_a = (height_a + (sin(u_song_time + noise * 10.0) * noise) * 0.05);
    height_b = (height_b + (sin(u_song_time + noise * 10.0) * noise) * 0.05);

    // Space between each point
    float gap = 1.0 / float(u_ring_count);

    // Position x of two points next to each other
    float x0 = gap * float(ring_n);
    float x1 = gap * (float(ring_n) + 1.0);

    // Line calculations
    vec2 pa = st - vec2(x0, height_a);
    vec2 ba = vec2(x1, height_b) - vec2(x0, height_a);
    float h = clamp(dot(pa, ba) / dot(ba, ba), -80.0, 80.0); // High clamp value fixes visual bug

    float dist = length(pa - ba * h);

    // Line appearance adjustments
    vec2 len = vec2(x1 - x0, abs(audio_texel.x - audio_texel.y));
    float ratio = len.y / len.x;
    float side = dist * ratio;

    dist = sqrt(dist * dist + side * side);

    // No need to draw if outside of the line (optimization)
    if (dist > u_thickness)
    {
        discard;
    }

    // Actual line segment calculation
    float line_segment = smoothstep(0.0, u_thickness, dist);

    // Background alpha increases when approaching "zero" height
    float bg_alpha = log(audio_texel.x + 1.0) * 8.0;

    // Distance from edges
    float f = sin(pixel_position.x * PI) * 3.0;
    bg_alpha *= clamp(f, 0.0, 1.0);

    // Texel fetch color data
    vec3 color_texel = vec3
    (
        texelFetch(u_tbo_tex[block_n], n + 1).r, 
        texelFetch(u_tbo_tex[block_n], n + 2).r, 
        texelFetch(u_tbo_tex[block_n], n + 3).r
    );

    // Apply background alpha
    float color_r = color_texel.r * bg_alpha;
    float color_g = color_texel.g * bg_alpha;
    float color_b = color_texel.b * bg_alpha;
    float color_a = 0.1;

    // Mix final color
    vec4 color_final = mix(vec4(color_r, color_g, color_b, color_a), vec4(0.0, 0.0, 0.0, 0.0), line_segment);

    // Middle (hearing) line draw
    if (pixel_position.x > 0.499 && pixel_position.x < 0.501)
    {
        // Assumed completion: blend the hearing line color over the result.
        color_final = mix(color_final, u_hline_color, u_hline_color.a);
    }

    FragColor = color_final;
}
