We're thrilled to celebrate the outstanding achievements of our recent graduates who joined us in January and dedicated themselves to completing the course and their capstone projects.
Over the past three months, the talented individuals of Batch #29 (our first fully online batch) undertook a diverse range of challenging projects. Their skills, enthusiasm, and commitment were evident throughout their work.
We are thankful to HP for their invaluable support, providing us with cutting-edge Z by HP workstations that empowered our students to achieve even more.
We invite you to explore these inspiring examples of how our students are applying data science to uncover valuable insights, explore new frontiers, and make a meaningful impact.
Are NPUs the future?
Showcasing local AI on mobile workstations using CPU, GPU and NPU capabilities
Students: Jan Roesnick, Kristina Liang, Marisa Davis, and Natalia Neamtu
HP is a global leader in computing and innovation, developing cutting-edge technology for professionals and businesses. While the industry is investing heavily in Neural Processing Units (NPUs), real-world benchmarks and use cases remain limited. NPUs are specialized processors designed for AI and machine learning tasks. They are ideal for Natural Language Processing (NLP) and computer vision applications, reducing the load on the CPU and GPU and offering high performance. Despite their potential, many customers and partners struggle to see the benefits of upgrading to AI-powered devices or running AI workloads locally instead of in the cloud.
To address this, the team developed an application running local AI models on three different tasks - chatbot, image classification, and image segmentation - to evaluate NPU performance in AI-driven tasks, comparing it with CPUs and GPUs. The app runs on two HP laptops - the ZBook Power (Intel Core Ultra 9, RTX 3000 Ada, 64 GB) and the ZBook Firefly (Intel Core Ultra 7, RTX A500, 32 GB).
The project measures utilization, offloading, elapsed time, latency and throughput across different processing units (CPU, GPU, and NPU). The findings will be showcased in the HP showroom in Geneva, allowing customers and partners to explore real-time AI performance and understand the advantages of NPUs firsthand.
To fully utilize the capabilities of a local AI workstation, the goal was to implement the solution locally, leverage state-of-the-art AI tools, and develop a dashboard to monitor system performance. As part of this effort, the team developed a range of AI tasks: natural language processing with an AI chatbot, and computer vision with image classification and object detection.
Chatbot
The chatbot is designed specifically for HP users, leveraging large language models (LLMs) and retrieval-augmented generation (RAG) to provide answers about HP workstations, specs, and recommendations. The chatbot retrieves information from preloaded HP PDFs. Unlike cloud-based AI assistants, it runs entirely locally, ensuring complete data privacy.
The chatbot is built on a self-RAG design that evaluates the relevance of the user’s question against the available source material, ensuring the generated answer is based on accurate, relevant information that matches the question. To use the chatbot, simply click the chat icon in the bottom right corner and start a conversation.
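The actual self-RAG pipeline grades retrieved passages using the LLM itself; purely as an illustration of the retrieval-and-relevance idea, here is a minimal sketch using a toy bag-of-words similarity. The `embed`, `cosine`, and `retrieve` helpers, the threshold, and the sample document chunks are hypothetical, not the team's code:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real pipeline would use a local embedding model."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], threshold: float = 0.2) -> list[str]:
    """Rank document chunks by similarity to the question, keeping only those
    above a relevance threshold; an empty result signals the chatbot to decline
    rather than invent an answer."""
    q = embed(question)
    scored = sorted(((cosine(q, embed(c)), c) for c in chunks), reverse=True)
    return [c for score, c in scored if score >= threshold]

chunks = [
    "The ZBook Firefly is a lightweight mobile workstation with 32 GB of RAM.",
    "The ZBook Power pairs an NVIDIA RTX 3000 Ada GPU with 64 GB of RAM.",
]
best = retrieve("How much RAM does the ZBook Firefly have?", chunks)[0]
print(best)
```

The relevance gate is the point: only chunks that actually match the question reach the LLM, which is what keeps answers grounded in the preloaded HP PDFs.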
Image classification
Image classification allows a computer to recognize objects in an image and categorize them. For instance, if you upload a picture of a dog, the model predicts the breed.
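The app runs a full pretrained vision model for this; purely to illustrate the final classification step (turning raw model scores into a labeled prediction), here is a small sketch where the labels and logits are made up:

```python
import numpy as np

# Hypothetical label set; a real model outputs scores over its full class list.
LABELS = ["golden retriever", "beagle", "border collie"]

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert raw model scores into probabilities that sum to 1."""
    z = logits - logits.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify(logits: np.ndarray) -> tuple[str, float]:
    """Return the top-1 label and its confidence."""
    probs = softmax(logits)
    i = int(np.argmax(probs))
    return LABELS[i], float(probs[i])

label, conf = classify(np.array([2.0, 0.5, 1.0]))
print(f"{label}: {conf:.2f}")
```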
Going beyond basic image recognition, there are many practical applications in various industries, such as AI-powered quality control in manufacturing, secure on-device document processing for ID verification in finance, enhanced document scanning and automation in smart offices, and X-ray or MRI analysis in medical imaging without relying on the cloud. The key metrics we are looking at here are:
- Elapsed time: Total time from start to result.
- Latency: Time per inference, i.e. per single model prediction (the lower the better).
- Throughput: Number of inferences per second (the higher the better).
Object detection
Object detection goes beyond classification; it identifies and locates multiple objects within an image. This is a key technology in computer vision applications, such as self-driving cars or security surveillance.
To evaluate performance, the app runs a benchmark test using 128 images, measuring the average processing time per image and the number of frames per second (FPS) where a higher value means smoother real-time detection.
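Average time per image and FPS are two views of the same measurement. A tiny sketch with illustrative per-image timings (the 4-image list is made up; the actual benchmark averages over 128 images):

```python
def fps_report(timings_s: list[float]) -> tuple[float, float]:
    """From per-image processing times in seconds, return
    (average milliseconds per image, frames per second)."""
    avg_s = sum(timings_s) / len(timings_s)
    return avg_s * 1000, 1.0 / avg_s

# Illustrative per-image times in seconds for a 4-image run.
avg_ms, fps = fps_report([0.020, 0.025, 0.020, 0.035])
print(f"{avg_ms:.1f} ms/image, {fps:.1f} FPS")
```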

CPU (Central Processing Unit): The CPU is a general-purpose processor designed to handle a wide variety of tasks. While it offers versatility and precision, it typically operates at a slower pace when managing large-scale parallel operations, such as those common in AI workloads.
GPU (Graphics Processing Unit): GPUs are optimized for parallel processing, making them well-suited for breaking down large tasks into smaller, simultaneous computations. This architecture is especially effective for training AI models. However, GPUs are relatively large and consume significantly more power.
NPU (Neural Processing Unit): NPUs are specialized processors built specifically for AI-related tasks. They strike a balance between the flexibility of CPUs and the parallel processing power of GPUs, while maintaining lower energy consumption. NPUs are designed to efficiently offload AI workloads from the CPU and GPU, enabling smoother performance even during complex or multitasking scenarios. This makes them a practical choice for everyday users who need AI capabilities without the high power demands of a GPU.
This project highlights the potential of Intel-powered HP AI PCs to run demanding AI tasks locally, demonstrating the growing viability of edge computing. By integrating a local LLM chatbot with RAG, image classification, object detection, and real-time resource monitoring, the app showcases how CPUs, GPUs, and NPUs perform across various AI workloads.
NPUs, in particular, offer a compelling combination of efficiency, speed, and responsiveness - making advanced AI applications accessible even on lightweight, mobile workstations. For professionals seeking performance without compromising on power consumption or data privacy, NPU-enabled devices represent a future-ready upgrade.
While challenges remain - such as broader software support and improved developer tooling - continued investment and real-world benchmarking will drive adoption. As this project demonstrates, the shift toward AI-powered workflows is well underway, and NPUs are poised to play a key role.
Watch the app in action:
Are NPU the future? - App demo - YouTube
Conclusion
As we conclude this successful session with Data Science Final Projects Group #29, we extend our sincere thanks to the students who joined us in January and dedicated themselves fully to completing the course and their final projects. Your commitment, skill, and passion for data science have been truly impressive. We wish you all the best in your future endeavors, and we are confident that you will continue to innovate and make a significant impact in your chosen fields.
For those inspired by these achievements and interested in pursuing their own data science journey, we are pleased to announce our upcoming program. Visit the Constructor Academy website to learn more and discover how you can join the next cohort of data science innovators.