


Hello,
And welcome to my portfolio.
Regards,
Eemeli Vaskelainen
Selected Professional Work
Data Bridge | 2025
Development of a data bridge capable of serving numerous sensors at the edge of the network, connecting them to a cloud backend and establishing a reliable, concurrent two-way flow of encrypted data.
In this project I worked with, for example: Python, Bleak, BLE, BlueZ, MQTT, Linux
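The core idea, packing each sensor reading into a compact, integrity-protected envelope before publishing it towards the cloud, can be sketched in a few lines. This is a minimal stdlib-only illustration: the field layout, the pre-shared key, and the HMAC tag (standing in for the project's actual encryption scheme) are all hypothetical, not the bridge's real protocol.

```python
import hashlib
import hmac
import struct

KEY = b"shared-secret"  # hypothetical pre-shared key


def pack_frame(sensor_id: int, timestamp_ms: int, value: float) -> bytes:
    """Pack one reading into a fixed-size envelope with an integrity tag."""
    payload = struct.pack(">HQd", sensor_id, timestamp_ms, value)
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()[:8]
    return payload + tag


def unpack_frame(frame: bytes) -> tuple[int, int, float]:
    """Verify the tag and unpack; raises ValueError on tampering."""
    payload, tag = frame[:-8], frame[-8:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return struct.unpack(">HQd", payload)
```

A frame like this is small enough to publish as a raw MQTT payload in both directions, which is what makes the concurrent two-way flow cheap to run at the edge.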
Data Collection Solutions | 2023 - 2025
Development of various AI training data collection solutions for a client, with Nvidia Jetson as the target hardware. Data was collected from various sensors, including cameras and LIDARs.
In this project I worked with, for example: Python, C++, CUDA, DDS, CAN, RTSP, MQTT, MCAP, ROS2, Docker, Azure, Linux
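When recording from multiple sensors, camera frames and LIDAR sweeps arrive on independent clocks, so a collection pipeline typically pairs each frame with the nearest sweep inside a tolerance window. The sketch below is a generic, stdlib-only illustration of that pairing; the function name and tolerance are made up for the example, not taken from the project.

```python
import bisect


def match_nearest(cam_stamps, lidar_stamps, tolerance):
    """Pair each camera timestamp with the closest LIDAR timestamp
    within `tolerance` seconds. `lidar_stamps` must be sorted."""
    pairs = []
    for t in cam_stamps:
        i = bisect.bisect_left(lidar_stamps, t)
        # Only the neighbors around the insertion point can be closest
        candidates = lidar_stamps[max(0, i - 1):i + 1]
        if not candidates:
            continue
        best = min(candidates, key=lambda s: abs(s - t))
        if abs(best - t) <= tolerance:
            pairs.append((t, best))
    return pairs
```

Unmatched frames are simply dropped here; a real recorder would usually log them for later inspection instead.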
Edge AI Migration | 2024
During the proof-of-concept phase of the project, I identified and implemented the changes needed in an existing solution developed for another target. In the productization phase, I helped our client migrate the developed software to run on Nvidia Jetson.
In this project I worked with, for example: Python, PyTorch, CUDA, Docker, AMQP, REST, Linux
Reinforcement Learning Template | 2024
Development of a reinforcement learning template to be used as a foundation for simulation-based AI training. The template laid the groundwork for training a neural network with double Q-learning to solve autonomous decision-making tasks.
In this project I worked with, for example: PyTorch, CUDA, Python, Docker
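The double Q-learning rule the template was built around keeps two value estimators: one selects the greedy next action, the other evaluates it, which counters the overestimation bias of plain Q-learning. Below is a tabular, stdlib-only sketch on a toy three-state chain; the environment and hyperparameters are purely illustrative (the actual template used PyTorch and neural networks, not tables).

```python
import random
from collections import defaultdict


def double_q_update(qa, qb, s, a, r, s_next, actions, alpha, gamma):
    """One double Q-learning step: one table picks the best next action,
    the other table evaluates it."""
    if random.random() < 0.5:
        qa, qb = qb, qa  # randomly swap roles so both tables learn
    best = max(actions, key=lambda an: qa[(s_next, an)])
    qa[(s, a)] += alpha * (r + gamma * qb[(s_next, best)] - qa[(s, a)])


def train_chain(episodes=2000, seed=0):
    """Toy chain 0 -> 1 -> 2: moving right from state 1 reaches the
    terminal goal state 2 and pays reward 1."""
    random.seed(seed)
    qa, qb = defaultdict(float), defaultdict(float)
    actions = (0, 1)  # 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != 2:
            a = random.choice(actions)  # pure exploration for simplicity
            s_next = min(s + 1, 2) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == 2 else 0.0
            double_q_update(qa, qb, s, a, r, s_next, actions, 0.1, 0.9)
            s = s_next
    return qa, qb
```

After training, both tables should prefer moving right from state 1, since that action leads directly to the reward.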
Software For Robotics | 2022 - 2024
An internal project focused on building expertise in robotics and autonomous vehicles. During the project, I contributed by developing various containerized software solutions and proof-of-concepts.
In this project I worked with, for example: ROS2, Docker, Python, C++, O3DE, AWS, Jenkins, Linux
Selected Side Projects
Tailcaster | 2024
A Bluetooth music player concept product, including a GUI developed from scratch and computer-vision-based interaction.
Learn More
Soundwaves | 2021
Sound visualizer software featuring a floating node-based UI, high-performance custom rendering, and video encoding.
Learn More
Tailcaster
The Tailcaster project started from the idea of creating something I would actually use regularly, a side project that wouldn't be forgotten once development ended. Out of the different possibilities, a Bluetooth music player became the target. The developed player features a display covered with a half-transparent mirror, combined with a computer-vision-based interaction method, "air touch", as demonstrated in the video.
The project got the name Tailcaster from the tail of wires always connected to the whole.
In this project I worked with, for example: C++, Python, GLSL, OpenGL, X11, MediaPipe, ZMQ, Protobuf, Bluetooth, Yocto, PyTorch, Linux
Below you can find a breakdown of the project:
Hardware
After trying different options, a Raspberry Pi 5 combined with a Hifiberry AMP4 Pro and a Camera Module 2 turned out to be a perfect, compact combination with enough performance and quality.



Bluetooth Audio
Utilizing the Pi's built-in Bluetooth together with bluealsa, the Pi was turned into a Bluetooth speaker. Furthermore, using BlueZ and a custom-developed Bluetooth host, Bluetooth events could be listened to and sent via D-Bus, turning the speaker into a player.



GUI
A GUI with the basic features and functionality of a Bluetooth player was developed from scratch. Smooth animations and text rendering turned out to be the most challenging parts of the GUI implementation.
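Smooth animation generally comes down to driving each animated property along an easing curve instead of linearly. The sketch below shows the standard cubic ease-in-out shape; it is a generic illustration of the technique, not the GUI's actual animation code.

```python
def ease_in_out_cubic(t: float) -> float:
    """Cubic ease-in-out: slow start, fast middle, slow stop, for 0 <= t <= 1."""
    if t < 0.5:
        return 4.0 * t ** 3
    return 1.0 - ((-2.0 * t + 2.0) ** 3) / 2.0


def animate(start: float, end: float, t: float) -> float:
    """Interpolate a property (position, alpha, ...) along the eased curve."""
    return start + (end - start) * ease_in_out_cubic(t)
```

Feeding `t` from a per-frame clock gives motion that accelerates in and decelerates out, which reads as much smoother than constant-velocity interpolation.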



Air Touch
Applied and optimized a pretrained hand gesture recognition model to make the "air touch" interaction possible. This worked out surprisingly well considering the Pi's limited computational capabilities.
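Hand-landmark models such as MediaPipe report fingertip positions as normalized 0..1 coordinates, so "air touch" essentially reduces to mapping those into pixel space and hit-testing the GUI's widgets. A simplified sketch of that mapping; the widget names and layout are made up for illustration.

```python
def hit_test(fingertip, widgets, frame_w, frame_h):
    """Map a normalized (0..1) fingertip coordinate to pixel space and
    return the name of the widget it touches, or None."""
    x, y = fingertip[0] * frame_w, fingertip[1] * frame_h
    for name, (wx, wy, ww, wh) in widgets.items():
        if wx <= x < wx + ww and wy <= y < wy + wh:
            return name
    return None
```

In practice the fingertip stream is also smoothed and debounced over a few frames before a touch is registered, otherwise landmark jitter triggers spurious presses.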



Covers
After a lot of trial and error, the 3D-printable model was ready after around 30 printing sessions. PLA turned out to be a fitting material choice for each part once better airflow was built into the design.



Soundwaves
The Soundwaves project was made as a showcase when applying for developer jobs back in 2021. The project combined a sound visualizer with a node-based editor. The main learnings from the project were how to implement full-scale software, good software architecture practices, how to implement custom rendering, and why smooth UX matters.
In this project I worked with, for example: C++, GLSL, OpenGL, ImGUI, AVC
Visualization outcome



Floating node-based UI
Fragment Shader
Following this section is the heart of the Soundwaves project: the fragment shader, which to this day is one of the most challenging pieces of code I have written.
Based on the current playback time, the shader reads the audio spectrum mapped into GPU memory and computes extremely smooth lines to create the visualization shown in the video.
Performance with this shader was very good: running the visualization in real time at 60 frames per second was entirely doable on a mediocre GPU (2021). With 256 lanes (as in the video), this meant running the following fragment shader up to 31,850,496,000 times every second.
I can recommend The Book of Shaders for exploring the fascinating world of shaders.
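The invocation count above checks out under the assumption of a full-HD render target where, in the worst case, every lane's geometry covers every fragment of every frame:

```python
lanes = 256                  # lanes, as in the video
width, height = 1920, 1080   # assumed full-HD render target
fps = 60                     # real-time target frame rate

# Worst case: the fragment shader runs once per lane, per pixel, per frame
invocations_per_second = lanes * width * height * fps
print(invocations_per_second)  # 31850496000
```

In practice the `discard` early-out in the shader keeps the real work well below this ceiling.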
#version 460 core
layout (location = 0) in vec3 position;
layout (location = 1) flat in int id; // 'flat' required
layout (location = 0) out vec4 FragColor;
layout (std430, binding = 2) buffer canvas_stack_fs
{
    int u_ring_count;
    float u_song_time;
    float u_flow_scale;
    float u_thickness;
    vec4 padding;
    vec4 u_hline_color;
};
const float PI = 3.141592653;
const float thickness = 0.010;
const int lane_count = 256;
const int block_count = 16;
uniform samplerBuffer u_tbo_tex[block_count];
float random(float x)
{
    return fract(sin(x + 1000.0) * 100000.0);
}
void main()
{    
    // Pixel coordinates
    vec2 pixel_position = position.xy;
    // Current pixel
    vec2 st = pixel_position;
    // Offset to the middle
    st.x -= 0.5;
    // Flow speed affects how much can be seen
    st.x *= 1.0 / (u_flow_scale); // Multiply by negative for inverse flow
    // Data length * point density = total time in seconds
    float total_time = u_ring_count * 0.016;
    // Distance represents song time
    // Always from "start 0.0 to end 1.0"
    float distance = u_song_time / total_time;
    // Current pixel where distance is applied
    vec2 sts = st;
    sts.x += distance;
    // Apply sliding effect to the current pixel
    st.x += distance;
    st = fract(st);
    // Current ring calculated from sts
    int ring_n = int(floor(sts.x * float(u_ring_count)));
    // Lane index from the total lane count
    int lane_n = id;
    
    int block_lane_count = lane_count / block_count;
    int block_n = int(floor(float(lane_n) / float(block_lane_count)));
    // Lane index within the block lane count
    int block_lane_n = lane_n - (block_lane_count * block_n);
    // Calculate location
    int n = block_lane_n * u_ring_count + ring_n;
    // SRGB
    n *= 4;
    // Fetch audio data
    vec2 audio_texel = vec2(texelFetch(u_tbo_tex[block_n], n).r, texelFetch(u_tbo_tex[block_n], n + 4).r);
    float height_a = audio_texel.x;
    float height_b = audio_texel.y;
    // Noise calculations
    int nr = int(floor(st.x));
    float l = fract(st.x);
    float noise = mix(random(float(nr + lane_n * 10)), random(float(nr + 1 + lane_n * 10)), smoothstep(0.0, 1.0, l));
    // Apply noise to height
    height_a = (height_a + (sin(u_song_time + noise * 10.0) * noise) * 0.05);
    height_b = (height_b + (sin(u_song_time + noise * 10.0) * noise) * 0.05);
    // Space between each point
    float gap = 1.0 / float(u_ring_count);
    // Position x of two points next to each other
    float x0 = gap * float(ring_n);
    float x1 = gap * (float(ring_n) + 1.0);
    // Line calculations
    vec2 pa = st - vec2(x0, height_a);
    vec2 ba = vec2(x1, height_b) - vec2(x0, height_a);
    float h = clamp(dot(pa, ba) / dot(ba, ba), -80.0, 80.0); // High clamp value fixes visual bug
    float dist = length(pa - ba * h);
    // Line appearance adjustments
    vec2 len = vec2(x1 - x0, abs(audio_texel.x - audio_texel.y));
    float ratio = len.y / len.x;
    float side = dist * ratio;
    dist = sqrt(dist * dist + side * side);
    // No need to draw if outside of the line (optimization)
    if (dist > u_thickness)
    {
        discard;
    }
    // Actual line segment calculation
    float line_segment = smoothstep(0.0, u_thickness, dist);
    // Background alpha increases when approaching "zero" height
    float bg_alpha = log(audio_texel.x + 1.0) * 8.0;
    // Distance from edges
    float f = sin(pixel_position.x * PI) * 3.0;
    bg_alpha *= clamp(f, 0.0, 1.0);
    // Texel fetch color data
    vec3 color_texel = vec3
    (
        texelFetch(u_tbo_tex[block_n], n + 1).r, 
        texelFetch(u_tbo_tex[block_n], n + 2).r, 
        texelFetch(u_tbo_tex[block_n], n + 3).r
    );
    // Apply background alpha
    float color_r = color_texel.r * bg_alpha;
    float color_g = color_texel.g * bg_alpha;
    float color_b = color_texel.b * bg_alpha;
    float color_a = 0.1;
    // Mix final color
    vec4 color_final = mix(vec4(color_r, color_g, color_b, color_a), vec4(0.0, 0.0, 0.0, 0.0), line_segment);
    // Middle (hearing) line draw
    if (pixel_position.x > 0.499 && pixel_position.x < 0.501)
    {
        color_final = u_hline_color;
    }
    FragColor = color_final;
}